
Bibliography





  1. Agarwal, P.K., Matoušek, J. (1992) Ray shooting and parametric search. 24th Annual ACM Symposium on Theory of Computing, Victoria, Canada, pp. 517–526.

  2. Aha, D.W. (1989) Incremental, instance-based learning of independent and graded concept descriptions. 6th International Workshop on Machine Learning, Ithaca, NY, USA, pp. 387–391.

  3. Aha, D.W., Bankert, R.L. (1995) A comparative evaluation of sequential feature selection algorithms. 5th International Workshop on Artificial Intelligence and Statistics, Ft. Lauderdale, FL, USA, pp. 1–7.

  4. Aha, D.W., Goldstone, R.L. (1992) Concept learning and flexible weighting. 14th Annual Conference of the Cognitive Science Society, Evanston, IL, USA, pp. 534–539.

  5. Aho, A.V., Hopcroft, J.E., Ullman, J.D. (2003) Projektowanie i analiza algorytmów. Wydawnictwo Helion, Gliwice. Polish translation; the original was first published in 1974.

  6. Akkus, A., Güvenir, H.A. (1996) K-nearest neighbor classification on feature projections. 13th International Conference on Machine Learning, Bari, Italy, pp. 12–19.

  7. Almuallim, H., Dietterich, T.G. (1991) Learning with many irrelevant features. 9th National Conference on Artificial Intelligence, Anaheim, CA, USA, pp. 547–552.

  8. Alpaydin, E. (1990) Neural Models of Incremental Supervised and Unsupervised Learning. PhD thesis, No 896, Département d'Informatique, École Polytechnique Fédérale de Lausanne, Switzerland.

  9. Alpaydin, E. (1997) Voting over multiple condensed nearest neighbors. Artificial Intelligence Review, Vol. 11, No. 1–5, pp. 115–132.

  10. Alpaydin, E., Jordan, M.I. (1996) Local linear perceptrons for classification. IEEE Transactions on Neural Networks, Vol. 7, No. 3, pp. 788–792.

  11. Alpaydin, E., Kaynak, C. (1998) Cascading classifiers. Kybernetika, Vol. 34, No. 4, pp. 369–374.

  12. Andersen, T., Martinez, T.R. (1995) A provably convergent dynamic training method for multilayer perceptron networks. 2nd International Symposium on Neuroinformatics and Neurocomputers, Rostov-on-Don, Russia, pp. 77–84.

  13. Andersson, A., Davidsson, P., Lindén, J. (1999) Measure-based classifier performance evaluation. Pattern Recognition Letters, Vol. 20, No. 11–13, pp. 1165–1173.

  14. Atkeson, C.G., Moore, A.W., Schaal, S. (1997) Locally weighted learning. Artificial Intelligence Review, Vol. 11, No. 1–5, pp. 11–73.

  15. Bailey, T., Jain, A.K. (1978) A note on distance-weighted k-nearest neighbor rules. IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-8, No. 4, pp. 311–313.

  16. Baim, P. (1988) A method for attribute selection in inductive learning systems. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 10, No. 6, pp. 888–896.

  17. Balko, S., Schmitt, I. (2002) Efficient nearest neighbor retrieval by using a local approximation technique — the active vertice approach. Preprint No. 2, Fakultät für Informatik, Universität Magdeburg, Germany.

  18. Baram, Y. (1998) Partial classification: The benefit of deferred decision. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, No. 8, pp. 769–776.

  19. Bay, S.D. (1998) Combining nearest neighbour classifiers through multiple feature subsets. 15th International Conference on Machine Learning, Madison, WI, USA, pp. 37–45.

  20. Beckmann, N., Kriegel, H.-P., Schneider, R., Seeger, B. (1990) The R*-tree: An efficient and robust access method for points and rectangles. ACM SIGMOD International Conference on Management of Data, Atlantic City, NJ, USA, pp. 322–331.

  21. Bentley, J.L. (1975) Multidimensional binary search trees used for associative searching. Communications of the ACM, Vol. 18, No. 9, pp. 509–517.

  22. Bentley, J.L. (1979) Multidimensional binary search trees in database applications. IEEE Transactions on Software Engineering, Vol. 5, No. 5, pp. 333–340.

  23. Berchtold, S., Böhm, Ch., Jagadish, H.V., Kriegel, H.-P., Sander, J. (2000) Independent quantization: An index compression technique for high-dimensional data spaces. 16th International Conference on Data Engineering (ICDE), San Diego, CA, USA, pp. 577–588.

  24. Beyer, K., Goldstein, J., Ramakrishnan, R., Shaft, U. (1999) When is "nearest neighbor" meaningful? 7th International Conference on Database Theory (ICDT), Jerusalem, Israel, pp. 217–235.

  25. Bezdek, J.C., Chuah, S.K., Leep, D. (1986) Generalized k-nearest neighbor rules. Fuzzy Sets and Systems, Vol. 18, No. 2, pp. 237–256.

  26. Bezdek, J.C., Reichherzer, T.R., Lim, G.S., Attikiouzel, Y. (1998) Multiple-prototype classifier design. IEEE Transactions on Systems, Man and Cybernetics, Vol. SMC-28, No. 1, pp. 67–79.

  27. Bhattacharyya, G.K., Johnson, R.A. (1977) Statistical Concepts and Methods. John Wiley and Sons, New York City, NY, USA.

  28. Bieniecki, W., Grabowski, Sz., Sekulska-Nalewajko, J., Turant, M., Kałużyński, A. (2002) System przetwarzania patomorfologicznych obrazów mikroskopowych. X Konferencja „Sieci i Systemy Informatyczne: teoria, projekty, wdrożenia, aplikacje”, Łódź, pp. 485–498.

  29. Blanzieri, E., Ricci, F. (1999) A minimum risk metric for nearest neighbour classification. 16th International Conference on Machine Learning, Bled, Slovenia, pp. 22–31.

  30. Blumer, A., Ehrenfeucht, A., Haussler, D., Warmuth, M. (1987) Occam's razor. Information Processing Letters, Vol. 24, No. 6, pp. 377–380.

  31. Borodin, A., Ostrovsky, R., Rabani, Y. (1999) Lower bounds for high dimensional nearest neighbor search and related problems. ACM Symposium on Theory of Computing (STOC), Atlanta, GA, USA, pp. 312–321.

  32. Bose, R.C., Ray-Chaudhuri, D.K. (1960) On a class of error correcting binary group codes. Information and Control, Vol. 3, No. 1, pp. 68–79.

  33. Bottou, L., Vapnik, V. (1992) Local learning algorithms. Neural Computation, Vol. 4, No. 6, pp. 888–900.

  34. Branke, J. (1995) Evolutionary algorithms for neural network design and training. 1st Nordic Workshop on Genetic Algorithms and its Applications, Vaasa, Finland, pp. 145–163.

  35. Breiman, L. (1996a) Bagging predictors. Machine Learning, Vol. 24, No. 2, pp. 123–140.

  36. Breiman, L. (1996b) Bias, variance and arcing classifiers. Technical Report TR 460, Dept. of Statistics, University of California, Berkeley, CA, USA.

  37. Breiman, L., Friedman, J.H., Olshen, R.A., Stone, C.J. (1984) Classification and Regression Trees. Wadsworth, Belmont, CA, USA.

  38. Brill, F.Z., Brown, D.E., Martin, W.N. (1992) Fast genetic selection of features for neural network classifiers. IEEE Transactions on Neural Networks, Vol. 3, No. 2, pp. 324–328.

  39. Brin, S. (1995) Near neighbor search in large metric spaces. 21st International Conference on Very Large Data Bases (VLDB), Zurich, Switzerland, pp. 574–584.

  40. Brodley, C.E. (1993) Addressing the selective superiority problem: Automatic algorithm/model class selection. 10th International Conference on Machine Learning, San Mateo, CA, USA, pp. 17–24.

  41. Broomhead, D.S., Lowe, D. (1988) Multi-variable functional interpolation and adaptive networks. Complex Systems, Vol. 2, No. 3, pp. 321–355.

  42. Burkhard, W.A., Keller, R.M. (1973) Some approaches to best-match file searching. Communications of the ACM, Vol. 16, No. 4, pp. 230–236.

  43. Cameron-Jones, R.M. (1995) Instance selection by encoding length heuristic with random mutation hill climbing. 8th Australian Joint Conference on Artificial Intelligence, Canberra, Australia, pp. 99–106.

  44. Cardie, C. (1993) Using decision trees to improve case-based learning. 10th International Conference on Machine Learning, Amherst, MA, USA, pp. 25–32.

  45. Chang, C.L. (1974) Finding prototypes for nearest neighbour classifiers. IEEE Transactions on Computers, Vol. C-23, No. 11, pp. 1179–1184.

  46. Chaudhuri, B.B. (1996) A new definition of neighbourhood of a point in multi-dimensional space. Pattern Recognition Letters, Vol. 17, No. 1, pp. 11–17.

  47. Chávez, E., Navarro, G., Baeza-Yates, R., Marroquín, J.L. (2001) Proximity searching in metric spaces. ACM Computing Surveys, Vol. 33, No. 3, pp. 273–321.

  48. Chávez, E., Navarro, G. (2001) Towards measuring the searching complexity of metric spaces. Encuentro Nacional de Computación (ENC'01), Vol. II, pp. 969–978.

  49. Chen, C.-H., Jóźwik, A. (1996) A sample set condensation algorithm for the class sensitive artificial neural network. Pattern Recognition Letters, Vol. 17, No. 8, pp. 819–823.

  50. Chen, S., Cowan, C.F.N., Grant, P.M. (1991) Orthogonal least squares learning for radial basis function networks. IEEE Transactions on Neural Networks, Vol. 2, No. 2, pp. 302–309.

  51. Cherkauer, K.J. (1996) Human expert-level performance on a scientific image analysis task by a system using combined artificial neural networks. 13th AAAI Workshop on Integrated Multiple Learned Models for Improving and Scaling Machine Learning Algorithms, Portland, OR, USA, pp. 15–21.

  52. Cherkauer, K.J., Shavlik, J.W. (1996) Growing simpler decision trees to facilitate knowledge discovery. 2nd International Conference on Knowledge Discovery and Data Mining, Portland, OR, USA, pp. 315–318.

  53. Ciaccia, P., Patella, M., Zezula, P. (1997) M-tree: An efficient access method for similarity search in metric spaces. 23rd International Conference on Very Large Data Bases (VLDB), Athens, Greece, pp. 426–435.

  54. Cichosz, P. (2000) Systemy uczące się. Wydawnictwa Naukowo-Techniczne, Warszawa.

  55. Clark, P., Niblett, T. (1989) The CN2 induction algorithm. Machine Learning, Vol. 3, No. 4, pp. 261–283.

  56. Clarkson, K.L. (1988) A randomised algorithm for closest-point queries. SIAM Journal on Computing, Vol. 17, No. 4, pp. 830–847.

  57. Clarkson, K.L. (1999) Nearest neighbor queries in metric spaces. Discrete & Computational Geometry, Vol. 22, No. 1, pp. 63–93.

  58. Cognitive Systems, Inc. (1992) ReMind: Case-based Reasoning Development Shell.

  59. Cost, S., Salzberg, S. (1993) A weighted nearest neighbor algorithm for learning with symbolic features. Machine Learning, Vol. 10, No. 1, pp. 57–78.

  60. Cover, T.M., Hart, P.E. (1967) Nearest neighbour pattern classification. IEEE Transactions on Information Theory, Vol. IT-13, No. 1, pp. 21–27.

  61. Creecy, R.H., Masand, B.M., Smith, S.J., Waltz, D.L. (1992) Trading MIPS and memory for knowledge engineering. Communications of the ACM, Vol. 35, No. 8, pp. 48–63.

  62. Dasarathy, B.V. (ed.) (1991) Nearest Neighbour (NN) Norms: NN Pattern Classification Techniques. IEEE Computer Society Press, Los Alamitos, CA, USA.

  63. Dasarathy, B.V., Sheela, B.V. (1979) A composite classifier system design: concepts and methodology. Proceedings of the IEEE (Special Issue on Pattern Recognition and Image Processing), Vol. 67, No. 5, pp. 708–713.

  64. Dietterich, T.G., Bakiri, G. (1991) Error-correcting output codes: A general method of improving multiclass inductive learning programs. 9th National Conference on Artificial Intelligence (AAAI-91), Anaheim, CA, USA, pp. 572–577.

  65. Dietterich, T.G., Bakiri, G. (1995) Solving multiclass learning problems via error-correcting output codes. Journal of Artificial Intelligence Research, Vol. 2, pp. 263–286.

  66. Dixon, J.K. (1979) Pattern recognition with partly missing data. IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-9, No. 10, pp. 617–621.

  67. Doak, J. (1992) An evaluation of feature selection methods and their application to computer security. Technical Report CSE-92-18, Dept. of Computer Science, University of California, Davis, CA, USA.

  68. Dobkin, D., Lipton, R. (1976) Multidimensional search problems. SIAM Journal on Computing, Vol. 5, No. 2, pp. 181–186.

  69. Domingos, P. (1997) Context-sensitive feature selection for lazy learners. Artificial Intelligence Review, Vol. 11, No. 1–5, pp. 227–253.

  70. Domingos, P. (1998) Occam's two razors: the sharp and the blunt. 4th International Conference on Knowledge Discovery and Data Mining, New York City, NY, USA, pp. 37–43.

  71. Domingos, P., Pazzani, M. (1996) Beyond independence: Conditions for the optimality of the simple Bayesian classifier. 13th International Conference on Machine Learning, Bari, Italy, pp. 105–112.

  72. Dougherty, J., Kohavi, R., Sahami, M. (1995) Supervised and unsupervised discretization of continuous features. 12th International Conference on Machine Learning, Tahoe City, CA, USA, pp. 194–202.

  73. Duda, R.O., Hart, P.E. (1973) Pattern Classification and Scene Analysis. John Wiley and Sons, New York City, NY, USA.

  74. Dudani, S.A. (1976) The distance-weighted k-nearest neighbor rule. IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-6, No. 4, pp. 325–327.

  75. Fahlman, S.E. (1988) Faster-learning variations on backpropagation: An empirical study. Connectionist Models Summer School, pp. 38–51.

  76. Feng, C., King, R., Sutherland, A. (1993) Statlog: Comparison of machine learning, statistical and neural network classification algorithms. Technical report, The Turing Institute.

  77. Fisher, R.A. (1936) The use of multiple measurements in taxonomic problems. Annals of Eugenics, Vol. 7, pp. 179–188.

  78. Fix, E., Hodges Jr., J.L. (1951) Discriminatory analysis — nonparametric discrimination: Consistency properties. Project 21-49-004, Report No. 4, USAF School of Aviation Medicine, Randolph Field, TX, USA, pp. 261–279.

  79. Fix, E., Hodges Jr., J.L. (1952) Discriminatory analysis — nonparametric discrimination: Small sample performance. Project 21-49-004, Report No. 11, USAF School of Aviation Medicine, Randolph Field, TX, USA, pp. 280–322.

  80. Freund, Y., Schapire, R.E. (1996) Experiments with a new boosting algorithm. 13th International Conference on Machine Learning, Bari, Italy, pp. 148–156.

  81. Friedrich, Ch.M. (1998) Ensembles of evolutionary created artificial neural networks. 3rd On-line Conference on Soft Computing in Engineering Design and Manufacturing (WSC3). In: R. Roy, T. Furuhashi & P.K. Chawdhry (eds.), Advances in Soft Computing, Springer, pp. 288–298, URL: http://www.tussy.uni-wh.de/~chris/publications/friedrich.wsc3.pdf.

  82. Friedman, J.H. (1994) Flexible metric nearest neighbour classification. Technical report, Dept. of Statistics, Stanford University, CA, USA.

  83. Friedman, J.H., Bentley, J.L., Finkel, R.A. (1977) An algorithm for finding best matches in logarithmic expected time. ACM Transactions on Mathematical Software, Vol. 3, pp. 209–226.

  84. Friedman, J.H., Kohavi, R., Yun, Y. (1996) Lazy decision trees. 13th National Conference on Artificial Intelligence, Portland, OR, USA, pp. 717–724.

  85. Fu, K.S. (1968) Sequential Methods in Pattern Recognition and Machine Learning. Academic Press, New York City, NY, USA.

  86. Fukunaga, K. (1990) Introduction to Statistical Pattern Recognition (Second Edition). Academic Press, New York City, NY, USA.

  87. Gama, J.M.P. da (1999) Combining Classification Algorithms. PhD thesis, Dept. of Computer Science, University of Porto, Porto, Portugal.

  88. Gates, G.W. (1972) The reduced nearest neighbor rule. IEEE Transactions on Information Theory, Vol. IT-18, No. 5, pp. 431–433.

  89. Giacinto, G., Roli, F. (1997) Adaptive selection of image classifiers. 9th International Conference on Image Analysis and Processing, Florence, Italy, pp. 38–45.

  90. Gowda, K.C., Krishna, G. (1979) The condensed nearest neighbour rule using the concept of mutual nearest neighbourhood. IEEE Transactions on Information Theory, Vol. IT-25, No. 4, pp. 488–490.

  91. Grabowski, Sz. (1999) Szybkie algorytmy redukcji zbioru odniesienia dla klasyfikatora typu 1-NS. VII Konferencja „Sieci i Systemy Informatyczne: teoria, projekty, wdrożenia, aplikacje”, Łódź, pp. 241–250.

  92. Grabowski, Sz. (2000a) Modyfikacje algorytmów redukcji zbiorów odniesienia dla klasyfikatora typu najbliższy sąsiad. VIII Konferencja „Sieci i Systemy Informatyczne: teoria, projekty, wdrożenia, aplikacje”, Łódź, pp. 381–398.

  93. Grabowski, Sz. (2000b) Porównanie metod selekcji cech dla klasyfikatorów minimalnoodległościowych. VIII Konferencja „Sieci i Systemy Informatyczne: teoria, projekty, wdrożenia, aplikacje”, Łódź, pp. 399–409.

  94. Grabowski, Sz. (2001a) Fast deterministic exact nearest neighbor search in the Manhattan metric. II Konferencja „Komputerowe Systemy Rozpoznawania” (KOSYR 2001), Miłków k/Karpacza, pp. 375–379.

  95. Grabowski, Sz. (2001b) Experiments with the k-NCN decision rule. IX Konferencja „Sieci i Systemy Informatyczne: teoria, projekty, wdrożenia, aplikacje”, Łódź, pp. 307–317.

  96. Grabowski, Sz. (2002a) Selecting subsets of features for the MFS classifier via a random mutation hill climbing technique. International IEEE Conference TCSET'2002, Lviv–Slavskie, Ukraine, pp. 221–222.

  97. Grabowski, Sz. (2002b) Voting over multiple k-NN classifiers. International IEEE Conference TCSET'2002, Lviv–Slavskie, Ukraine, pp. 223–225.

  98. Grabowski, Sz. (2002c) Lokalny wybór zredukowanego zbioru odniesienia. Seminarium nt. „Przetwarzanie i analiza sygnałów w systemach wizji i sterowania”, Słok k/Bełchatowa, pp. 142–147.

  99. Grabowski, Sz. (2002d) Towards decision rule based on closer symmetric neighborhood. Biocybernetics and Biomedical Engineering (a quarterly of the Polish Academy of Sciences), accepted for publication.

  100. Grabowski, Sz. (2003a) A family of cascade NN-like classifiers. 7th International IEEE Conference on Experience of Designing and Application of CAD Systems in Microelectronics (CADSM), Lviv–Slavske, Ukraine, pp. 503–506.

  101. Grabowski, Sz. (2003b) Telescope ensembles of reduced sets. III Konferencja „Komputerowe Systemy Rozpoznawania” (KOSYR 2003), Miłków k/Karpacza, accepted for publication.

  102. Grabowski, Sz., Baranowski, M. (2002) Implementacja algorytmu szybkiego deterministycznego szukania najbliższego sąsiada w metryce miejskiej. X Konferencja „Sieci i Systemy Informatyczne: teoria, projekty, wdrożenia, aplikacje”, Łódź, pp. 499–514.

  103. Grabowski, Sz., Jóźwik, A. (1999) Redukcja edytowanego zbioru odniesienia dla klasyfikatora typu 1-NS przy użyciu poprawionego algorytmu redukcji Tomeka. VII Konferencja „Sieci i Systemy Informatyczne: teoria, projekty, wdrożenia, aplikacje”, Łódź, pp. 219–227.

  104. Grabowski, Sz., Jóźwik, A. (2003) Sample set reduction for nearest neighbor classifiers under different speed requirements. 7th International IEEE Conference on Experience of Designing and Application of CAD Systems in Microelectronics (CADSM), Lviv–Slavske, Ukraine, pp. 465–468.

  105. Grabowski, Sz., Jóźwik, A., Chen, C.-H. (2003) Nearest neighbor decision rule for pixel classification in remote sensing. Submitted as a chapter of the monograph "Frontiers of Remote Sensing Info Processing", ed. Steven Patt, World Scientific Publishing Co. Pte. Ltd., Republic of Singapore.

  106. Grabowski, Sz., Sokołowska, B. (2003) Voting over multiple k-NN and k-NCN classifiers for detection of respiration pathology. III Konferencja „Komputerowe Systemy Rozpoznawania” (KOSYR 2003), Miłków k/Karpacza, accepted for publication.

  107. Grossman, D., Williams, T. (1999) Machine learning ensembles: An empirical study and novel approach. Unpublished manuscript, University of Washington, Seattle, WA, USA, URL: http://www.cs.washington.edu/homes/grossman/projects/573projects/learning.

  108. Guo, Z., Uhrig, R.E. (1992) Using genetic algorithms to select inputs for neural networks. International Workshop on Combinations of Genetic Algorithms and Neural Networks (COGANN-92), Los Alamitos, CA, USA, pp. 223–234.

  109. Guttman, A. (1984) R-trees: A dynamic index structure for spatial searching. ACM SIGMOD International Conference on Management of Data, Boston, MA, USA, pp. 47–57.

  110. Hall, M. (1995) Selection of attributes for modeling Bach chorales by a genetic algorithm. 2nd New Zealand International Two-Stream Conference on Artificial Neural Networks and Expert Systems, Dunedin, New Zealand, pp. 182–185.

  111. Hansen, L.K., Salamon, P. (1990) Neural network ensembles. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 12, No. 10, pp. 993–1001.

  112. Hart, P.E. (1968) The condensed nearest neighbour rule. IEEE Transactions on Information Theory, Vol. IT-14, No. 3, pp. 515–516.

  113. Hastie, T., Tibshirani, R. (1996) Discriminant adaptive nearest neighbor classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 18, No. 6, pp. 607–616.

  114. Hebb, D.O. (1949) The Organization of Behavior: A Neuropsychological Theory. John Wiley and Sons, New York City, NY, USA.

  115. Hecht-Nielsen, R. (1987) Counterpropagation networks. Applied Optics, Vol. 26, No. 23, pp. 4979–4984.

  116. Ho, T.K. (1998a) The random subspace method for constructing decision forests. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, No. 8, pp. 832–844.

  117. Ho, T.K. (1998b) Nearest neighbors in random subspaces. 2nd International Workshop on Statistical Techniques in Pattern Recognition, Sydney, Australia, pp. 640–648.

  118. Holmes, G., Nevill-Manning, C.G. (1995) Feature selection via the discovery of simple classification rules. Symposium on Intelligent Data Analysis, Baden-Baden, Germany.

  119. Holte, R.C. (1993) Very simple classification rules perform well on most commonly used datasets. Machine Learning, Vol. 11, No. 1, pp. 63–91.

  120. Huang, Y.S., Suen, C.Y. (1995) A method of combining multiple experts for the recognition of unconstrained handwritten numerals. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 17, No. 1, pp. 90–93.

  121. Hyafil, L., Rivest, R.L. (1976) Constructing optimal binary decision trees is NP-complete. Information Processing Letters, Vol. 5, No. 1, pp. 15–17.

  122. Indurkhya, N., Weiss, S.M. (1998) Estimating performance gains for voted decision trees. Intelligent Data Analysis, Vol. 2, No. 4.

  123. Indyk, P., Motwani, R. (1998) Approximate nearest neighbors: Towards removing the curse of dimensionality. 30th Annual ACM Symposium on Theory of Computing, Dallas, TX, USA, pp. 604–613.

  124. Jacobs, R.A., Jordan, M.I., Nowlan, S.J., Hinton, G.E. (1991) Adaptive mixtures of local experts. Neural Computation, Vol. 3, No. 1, pp. 79–87.

  125. Jain, A.K., Duin, R.P.W., Mao, J. (2000) Statistical pattern recognition: A review. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 1, pp. 4–37.

  126. Jajuga, K. (1990) Statystyczna teoria rozpoznawania obrazów. PWN, Warszawa.

  127. Jankowski, N. (1999) Ontogeniczne sieci neuronowe w zastosowaniu do klasyfikacji danych medycznych. PhD thesis, Katedra Metod Komputerowych, Uniwersytet Mikołaja Kopernika, Toruń.

  128. Jaromczyk, J.W., Toussaint, G.T. (1992) Relative neighbourhood graphs and their relatives. Proceedings of the IEEE, Vol. 80, pp. 1502–1517.

  129. John, G., Kohavi, R., Pfleger, K. (1994) Irrelevant features and the subset selection problem. 11th International Conference on Machine Learning, New Brunswick, NJ, USA, pp. 121–129.

  130. Jóźwik, A. (1983) A learning scheme for a fuzzy k-NN rule. Pattern Recognition Letters, Vol. 1, No. 5–6, pp. 287–289.

  131. Jóźwik, A. (2001) Porównanie klasyfikatora standardowego i równoległej sieci dwudecyzyjnych klasyfikatorów typu k-NS. IX Konferencja „Sieci i Systemy Informatyczne: teoria, projekty, wdrożenia, aplikacje”, Łódź, pp. 381–389.

  132. Jóźwik, A., Chmielewski, L., Cudny, W., Skłodowski, M. (1996) A 1-NN preclassifier for fuzzy k-NN rule. 13th International Conference on Pattern Recognition, Vienna, Austria, Vol. IV, track D, pp. 234–238.

  133. Jóźwik, A., Grabowski, Sz. (2003) Metody konstrukcji szybkich klasyfikatorów do analizy obrazów optycznych. Prace Przemysłowego Instytutu Elektroniki (PAN), submitted.

  134. Jóźwik, A., Serpico, S.B., Roli, F. (1995) Condensed version of the k-NN rule for remote sensing image classification. Image and Signal Processing for Remote Sensing II, EUROPTO Proceedings, SPIE, Vol. 2579, Paris, France, pp. 196–198.

  135. Jóźwik, A., Stawska, Z. (1999) Wykorzystanie reguł 1-NS i k-NS oraz metody k średnich do konstrukcji szybkiego klasyfikatora. VII Konferencja „Sieci i Systemy Informatyczne: teoria, projekty, wdrożenia, aplikacje”, Łódź, pp. 241–250.

  136. Jóźwik, A., Vernazza, G. (1988) Recognition of leucocytes by a parallel k-NN classifiers. Lecture Notes of ICB Seminar, Warszawa, pp. 138–153.

  137. Kalantari, I., McDonald, G. (1983) A data structure and an algorithm for the nearest point problem. IEEE Transactions on Software Engineering, Vol. 9, No. 5, pp. 631–634.

  138. Katayama, N., Satoh, S. (1997) The SR-tree: An index structure for high dimensional nearest neighbor queries. ACM SIGMOD International Conference on Management of Data, Tucson, AZ, USA, pp. 369–380.

  139. Kaynak, C., Alpaydin, E. (2000) Multistage cascading of multiple classifiers: One man's noise is another man's data. 17th International Conference on Machine Learning, Stanford University, CA, USA, pp. 455–462.

  140. Keller, J.M., Gray, M.R., Givens Jr., J.A. (1985) A fuzzy k-nearest neighbor algorithm. IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-15, No. 4, pp. 580–585.

  141. Kelly, J.D., Davis, L. (1991) A hybrid genetic algorithm for classification. 12th International Joint Conference on Artificial Intelligence, Sydney, Australia, pp. 645–650.

  142. Kibler, D., Aha, D.W. (1987) Learning representative exemplars of concepts: An initial case study. 4th International Workshop on Machine Learning, Irvine, CA, USA, pp. 24–30.

  143. Kira, K., Rendell, L.A. (1992) A practical approach to feature selection. 9th International Conference on Machine Learning, Aberdeen, Scotland, pp. 249–256.

  144. Kittler, J. (1978) Feature set search algorithms. In: C.-H. Chen (ed.), Pattern Recognition and Signal Processing, Sijthoff & Noordhoff, the Netherlands.

  145. Kittler, J., Hatef, M., Duin, R.P.W., Matas, J. (1998) On combining classifiers. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, No. 3, pp. 226–239.

  146. Knuth, D.E. (1973) The Art of Computer Programming: Sorting and Searching. Addison-Wesley, Reading, MA, USA.

  147. Kohavi, R. (1996) Scaling up the accuracy of naïve-Bayes classifiers: A decision-tree hybrid. 2nd International Conference on Knowledge Discovery and Data Mining, Portland, OR, USA, pp. 202–207.

  148. Kohavi, R., Wolpert, D.H. (1996) Bias plus variance decomposition for zero-one loss functions. 13th International Conference on Machine Learning, Bari, Italy, pp. 275–283.

  149. Kohonen, T. (1986) Learning vector quantization for pattern recognition. Report TKK-F-A601, Helsinki University of Technology, Espoo, Finland.

  150. Kolen, J.F., Pollack, J.B. (1991) Back propagation is sensitive to initial conditions. In: Advances in Neural Information Processing Systems, Morgan Kaufmann, Vol. 3, San Francisco, CA, USA, pp. 860–867.

  151. Koller, D., Sahami, M. (1996) Toward optimal feature selection. 13th International Conference on Machine Learning, Bari, Italy, pp. 284–292.

  152. Kononenko, I. (1991) Semi-naïve Bayesian classifier. 6th European Working Session on Learning, Porto, Portugal, pp. 206–219.

  153. Kruskal Jr., J.B. (1956) On the shortest spanning subtree of a graph and the traveling salesman problem. Proceedings of the American Mathematical Society, Vol. 7, No. 1, pp. 48–50.

  154. Kubat, M., Chen, W.K. (1998) Weighted projection in nearest-neighbor classifiers. 1st Southern Symposium on Computation, Hattiesburg, MS, USA.

  155. Kudo, M., Sklansky, J. (2000) Comparison of algorithms that select features for pattern classifiers. Pattern Recognition, Vol. 33, No. 1, pp. 25–41.

  156. Kuncheva, L.I. (1995) Editing for the k-nearest neighbors rule by a genetic algorithm. Pattern Recognition Letters, Special Issue on Genetic Algorithms, Vol. 16, No. 8, pp. 809–814.

  157. Kuncheva, L.I. (2000) Cluster-and-selection method for classifier combination. 4th International Conference on Knowledge-Based Intelligent Engineering Systems & Allied Technologies (KES'2000), Brighton, UK, pp. 185–188.

  158. Kuncheva, L.I. (2001) Reducing the computational demand of the nearest neighbor classifier. School of Informatics, Symposium on Computing 2001, Aberystwyth, UK, pp. 61–64.

  159. Kuncheva, L.I. (2002) Switching between selection and fusion in combining classifiers: An experiment. IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-32, No. 2, pp. 146–156.

  160. Kuncheva, L.I., Bezdek, J.C., Duin, R.P.W. (2001) Decision templates for multiple classifier fusion. Pattern Recognition, Vol. 34, No. 2, pp. 299–314.

  161. Kuncheva, L.I., Jain, L.C. (1999) Nearest neighbor classifier: Simultaneous editing and feature selection. Pattern Recognition Letters, Vol. 20, No. 11–13, pp. 1149–1156.

  162. Kuncheva, L.I., Whitaker, C.J., Shipp, C.A., Duin, R.P.W. (2000) Is independence good for combining classifiers? 15th International Conference on Pattern Recognition, Barcelona, Spain, pp. 168–171.

  163. Kuncheva, L.I., Whitaker, C.J. (2003) Measures of diversity in classifier ensembles. Machine Learning, Vol. 51, No. 2, pp. 181–207.

  164. Kurzyński, M. (1997) Rozpoznawanie obiektów — metody statystyczne. Oficyna Wydawnicza Politechniki Wrocławskiej, Wrocław.

  165. Kushilevitz, E., Ostrovsky, R., Rabani, Y. (1998) Efficient search for approximate nearest neighbor in high dimensional spaces. 30th Annual ACM Symposium on Theory of Computing, Dallas, TX, USA, pp. 614–623.

  166. Langley, P. (1993) Induction of recursive Bayesian classifiers. 8th European Conference on Machine Learning, Vienna, Austria, pp. 153–164.

  167. Langley, P., Iba, W. (1993) Average-case analysis of a nearest neighbor algorithm. 13th International Joint Conference on Artificial Intelligence, Chambery, France, pp. 889–894.

  168. Langley, P., Sage, S. (1994) Induction of selective Bayesian classifiers. 10th Conference on Uncertainty in Artificial Intelligence, Seattle, WA, USA, pp. 399–406.

  169. Lee, C. (1994) An instance-based learning method for databases: An information theoretic approach. 9th European Conference on Machine Learning, Catania, Italy, pp. 387–390.

  170. Lee, R.C.T., Slagle, J.R., Mong, C.T. (1976) Application of clustering to estimate missing data and improve data integrity. 2nd International Conference on Software Engineering, San Francisco, CA, USA, pp. 539–544.

  171. Liao, S., Lopez, M.A., Leutenegger, S.T. (2001) High dimensional similarity search with space filling curves. International Conference on Data Engineering, Heidelberg, Germany, pp. 615–622.

  172. Lowe, D.G. (1995) Similarity metric learning for a variable-kernel classifier. Neural Computation, Vol. 7, No. 1, pp. 72–85.

  173. MacQueen, J. (1967) Some methods for classification and analysis of multivariate observations. 5th Berkeley Symposium on Mathematical Statistics and Probability, University of California Press, Berkeley, CA, USA, Vol. 1, pp. 281–296.

  174. Maron, O., Moore, A.W. (1997) The racing algorithm: Model selection for lazy learners. Artificial Intelligence Review, Vol. 11, No. 1–5, pp. 193–225.

  175. Marquis Condorcet, J.A. (1781) Sur les élections par scrutin. Histoire de l'Académie Royale des Sciences, pp. 31–34.

  176. Marill, T., Green, D.M. (1963) On the effectiveness of receptors in recognition systems. IEEE Transactions on Information Theory, Vol. IT-9, No. 1, pp. 1–17.

  177. Martin, J.K. (1997) An exact probability metric for decision tree splitting and stopping. Machine Learning, Vol. 28, No. 2–3, pp. 257–291.

  178. Masulli, F., Valentini, G. (2000) Comparing decomposition methods for classification. 4th International Conference on Knowledge-Based Intelligent Engineering Systems & Allied Technologies, Piscataway, NJ, USA, pp. 788–791.

  179. Matheus, C.J., Rendell, L.A. (1989) Constructive induction on decision trees. 11th International Joint Conference on Artificial Intelligence, Detroit, MI, USA, pp. 645–650.

  180. Matoušek, J. (1992) Reporting points in halfspaces. Computational Geometry: Theory and Applications, Vol. 2, No. 3, pp. 169–186.

  181. McCulloch, W.S., Pitts, W.H. (1943) A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, Vol. 5, pp. 115–133.

  182. Meiser, S. (1993) Point location in arrangements of hyperplanes. Information and Computation, Vol. 106, No. 2, pp. 286–303.

  183. Merz, Ch., Murphy, P.M. (1996) UCI repository of machine learning databases. [http://www.ics.uci.edu/~mlearn/MLRepository.html].

  184. Michalski, R.S., Stepp, R.E., Diday, E. (1981) A recent advance in data analysis: clustering objects into classes characterized by conjunctive concepts. In: L.N. Kanal & A. Rosenfeld (eds.), Progress in Pattern Recognition, Vol. 1, North-Holland, New York, USA, pp. 33–56.

  185. Michie, D., Spiegelhalter, D.J., Taylor, C.C. (1994) Machine Learning, Neural and Statistical Classification. Ellis Horwood, New York City, NY, USA.

  186. Minsky, M.L., Papert, S. (1969) Perceptrons: An Introduction to Computational Geometry. MIT Press, Cambridge, MA, USA.

  187. Mitchell, T. (1980) The need for biases in learning generalizations. Technical Report, Computer Science Dept., Rutgers University, New Brunswick, NJ, USA. Reprinted in: J.W. Shavlik & T. Dietterich (eds.) (1990), Readings in Machine Learning, Morgan Kaufmann, San Mateo, CA, USA, pp. 184–191.

  188. Mitchell, T. (1997) Machine Learning. McGraw-Hill Companies, Inc., New York City, NY, USA.

  189. Mohri, T., Tanaka, H. (1994) An optimal weighting criterion of case indexing for both numeric and symbolic attributes. AAAI Workshop on Case-Based Reasoning, Seattle, WA, USA, pp. 123–127.

  190. Mollineda, R.A., Ferri, F., Vidal, E. (2000) Merge-based prototype selection for nearest neighbor classification. 4th World Multiconference on Systemics, Cybernetics and Informatics, Vol. VII (Computer Science and Engineering), Orlando, FL, USA, pp. 640–645.

  191. Moody, J., Darken, C.J. (1989) Fast learning in networks of locally-tuned processing units. Neural Computation, Vol. 1, No. 2, pp. 281–294.

  192. Moreira, M., Mayoraz, E. (1998) Improved pairwise coupling classification with correcting classifiers. 10th European Conference on Machine Learning (ECML), Chemnitz, Germany, pp. 160–171.

  193. Morin, R.L., Raeside, D.E. (1981) A reappraisal of distance-weighted k-nearest neighbor classification for pattern recognition with missing data. IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-11, No. 3, pp. 241–243.

  194. Mucciardi, A.N., Gose, E.E. (1971) A comparison of seven techniques for choosing subsets of pattern recognition properties. IEEE Transactions on Computers, Vol. C-20, No. 9, pp. 1023–1031.

  195. Murtagh, F. (1985) Multidimensional Clustering Algorithms. Physica-Verlag, Vienna, Austria.

  196. Narendra, P.M., Fukunaga, K. (1977) A branch and bound algorithm for feature subset selection. IEEE Transactions on Computers, Vol. C-26, No. 9, pp. 917–922.

  197. Nieniewski, M., Chmielewski, L., Jóźwik, A., Skłodowski, M. (1999) Morphological detection and feature-based classification of cracked regions in ferrites. Machine Graphics and Vision, Vol. 8, No. 4.

  198. Nilsson, N.J. (1965) Learning Machines: Foundations of Trainable Pattern-Classifying Systems. McGraw-Hill, New York City, NY, USA.

  199. Nosofsky, R.M., Clark, S.E., Shin, H.J. (1989) Rules and exemplars in categorization, identification, and recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, Vol. 15, No. 2, pp. 282–304.

  200. O'Callaghan, J.F. (1975) An alternative definition for "neighborhood of a point". IEEE Transactions on Computers, Vol. C-24, No. 11, pp. 1121–1125.

  201. Opitz, D., Maclin, R. (1999) Popular ensemble methods: An empirical study. Journal of Artificial Intelligence Research, Vol. 11, pp. 169–198.

  202. Pagel, B.-U., Korn, F., Faloutsos, Ch. (2000) Deflating the dimensionality curse using multiple fractal dimensions. 16th International Conference on Data Engineering (ICDE), San Diego, CA, USA, pp. 589–598.

  203. Parmanto, B., Munro, P.W., Doyle, H.R. (1996) Improving committee diagnosis with resampling techniques. In: D.S. Touretzky, M.C. Mozer & M.E. Hasselmo (eds.), Advances in Neural Information Processing Systems, Vol. 8, MIT Press, Cambridge, MA, USA, pp. 882–888.

  204. Pazzani, M. (1996) Constructive induction of Cartesian product attributes. Conference ISIS96: Information, Statistics and Induction in Science, Melbourne, Australia, pp. 66–77.

  205. Prechelt, L. (1996) A quantitative study of experimental evaluations of neural network learning algorithms: Current research practice. Neural Networks, Vol. 9, No. 3, pp. 457–462.

  206. Preparata, F.P., Shamos, M.I. (1985) Computational Geometry: An Introduction. Springer-Verlag, New York City, NY, USA.

  207. Quinlan, J.R. (1979) Discovering rules from large collections of examples: a case study. In: Expert Systems in the Micro-electronic Age, Edinburgh University Press, Edinburgh, UK, pp. 168–201.

  208. Quinlan, J.R. (1993) C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo, CA, USA.

  209. Rastrigin, L.A., Erenstein, R.H. (1981) Method of Collective Recognition. Energoizdat, Moscow, SU (in Russian).

  210. Rauber, T.W., Steiger-Garção, A.S. (1993) Feature selection of categorical attributes based on contingency table analysis. 5th Portuguese Conference on Pattern Recognition, Porto, Portugal.

  211. Raviv, Y., Intrator, N. (1996) Bootstrapping with noise: an effective regularization technique. Connection Science, Vol. 8, No. 3–4, pp. 355–372.

  212. Ricci, F., Aha, D.W. (1998) Error-correcting output codes for local learners. 10th European Conference on Machine Learning, Chemnitz, Germany, pp. 280–291.

  213. Ricci, F., Avesani, P. (1995) Learning a local similarity metric for case-based reasoning. International Conference on Case-Based Reasoning (ICCBR), Sesimbra, Portugal, pp. 301–312.

  214. Riedmiller, M., Braun, H. (1993) A direct adaptive method for faster backpropagation learning: The RPROP algorithm. IEEE Conference on Neural Networks, San Francisco, CA, USA, pp. 586–591.

  215. Rimer, M.E., Martinez, T.R., Wilson, D.R. (2002) Improving speech recognition learning through lazy learning. IEEE International Joint Conference on Neural Networks (IJCNN'02), Honolulu, HI, USA, pp. 2568–2573.

  216. Ritter, G.L., Woodruff, H.B., Lowry, S.R., Isenhour, T.L. (1975) An algorithm for a selective nearest neighbour decision rule. IEEE Transactions on Information Theory, Vol. IT-21, No. 6, pp. 665–669.

  217. Roli, F. (1996) Multisensor image recognition by neural networks with understandable behaviour. International Journal of Pattern Recognition and Artificial Intelligence, Vol. 10, No. 8, pp. 887–917.

  218. Rosenblatt, F. (1958) The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, Vol. 65, pp. 386–408.

  219. Rumelhart, D.E., Hinton, G.E., Williams, R.J. (1986) Learning internal representations by error propagation. In: D.E. Rumelhart & J.L. McClelland (eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1, pp. 318–362, MIT Press, Cambridge, MA, USA.

  220. Rutkowska, D., Piliński, M., Rutkowski, L. (1997) Sieci neuronowe, algorytmy genetyczne i systemy rozmyte. PWN, Warszawa–Łódź.

  221. Salzberg, S. (1991) A nearest hyperrectangle learning method. Machine Learning, Vol. 6, No. 3, pp. 251–276.

  222. Sánchez, J.S., Pla, F., Ferri, F.J. (1997a) Prototype selection for the nearest neighbour rule through proximity graphs. Pattern Recognition Letters, Vol. 18, No. 6, pp. 507–513.

  223. Sánchez, J.S., Pla, F., Ferri, F.J. (1997b) On the use of neighbourhood-based non-parametric classifiers. Pattern Recognition Letters, Vol. 18, No. 11–13, pp. 1179–1186.

  224. Sánchez, J.S., Pla, F., Ferri, F.J. (1998) Improving the k-NCN classification rule through heuristic modifications. Pattern Recognition Letters, Vol. 19, No. 13, pp. 1165–1170.

  225. Schaffer, C. (1994) A conservation law for generalization performance. 11th International Conference on Machine Learning, New Brunswick, NJ, USA, pp. 259–265.

  226. Schapire, R.E. (1990) The strength of weak learnability. Machine Learning, Vol. 5, No. 2, pp. 197–227.

  227. Schapire, R.E., Freund, Y., Bartlett, P., Lee, W.S. (1997) Boosting the margin: A new explanation for the effectiveness of voting methods. 14th International Conference on Machine Learning, Nashville, TN, USA.

  228. Schiffmann, W., Joost, M., Werner, R. (1993) Comparison of optimized backpropagation algorithms. European Symposium on Artificial Neural Networks (ESANN), Brussels, Belgium, pp. 97–104.

  229. Shepherd, J., Zhu, X., Megiddo, N. (1999) A fast indexing method for multidimensional nearest neighbor search. SPIE Conference on Storage and Retrieval of Image and Video Databases, San Jose, CA, USA, pp. 350–355.

  230. Shipp, C.A., Kuncheva, L.I. (2002) Relationships between combination methods and measures of diversity in combining classifiers. Information Fusion, Vol. 3, No. 2, pp. 135–148.

  231. Short, R.D., Fukunaga, K. (1980) A new nearest neighbour distance measure. 5th IEEE International Conference on Pattern Recognition, Miami Beach, FL, USA, pp. 81–86.

  232. Short, R.D., Fukunaga, K. (1981) The optimal distance measure for nearest neighbour classification. IEEE Transactions on Information Theory, Vol. IT-27, No. 5, pp. 622–627.

  233. Sierra, B., Larrañaga, P., Inza, I. (2000) K Diplomatic Nearest Neighbour: giving equal chance to all existing classes. Journal of Artificial Intelligence Research.

  234. Singh, M., Provan, G.M. (1996) Efficient learning of selective Bayesian network classifiers. 13th International Conference on Machine Learning, Bari, Italy, pp. 453–461.

  235. Skalak, D.B. (1994) Prototype and feature selection by sampling and random mutation hill climbing algorithms. 11th International Conference on Machine Learning, New Brunswick, NJ, USA, pp. 293–301.

  236. Skalak, D.B. (1997) Prototype Selection for Composite Nearest Neighbor Classifiers. PhD thesis, Dept. of Computer Science, University of Massachusetts, Amherst, MA, USA.

  237. Skubalska-Rafajłowicz, E., Krzyżak, A. (1995) Data sorting along a space filling curve for fast pattern recognition. 2nd International Symposium on Methods and Models in Automation and Robotics, Międzyzdroje, Poland, Vol. 1, pp. 339–344.

  238. Skubalska-Rafajłowicz, E., Krzyżak, A. (1996) Fast k-NN classification rule using metric on space-filling curves. 13th International Conference on Pattern Recognition, Vienna, Austria, pp. 121–125.

  239. Sneath, P.H.A., Sokal, R.R. (1973) Numerical Taxonomy. W.H. Freeman & Co., San Francisco, CA, USA.

  240. Sollich, P., Krogh, A. (1996) Learning with ensembles: How overfitting can be useful. In: D.S. Touretzky, M.C. Mozer & M.E. Hasselmo (eds.), Advances in Neural Information Processing Systems, Vol. 8, MIT Press, Cambridge, MA, USA, pp. 190–196.

  241. Somol, P., Pudil, P., Ferri, F.J., Kittler, J. (2000) Fast branch & bound algorithm in feature selection. Invited paper, 4th World Multiconference on Systemics, Cybernetics and Informatics, Orlando, FL, USA, IIIS, pp. 646–651.

  242. Specht, D.F. (1990) Probabilistic neural networks. Neural Networks, Vol. 3, No. 1, pp. 109–118.

  243. Specht, D.F. (1992) Enhancements to probabilistic neural networks. International Joint Conference on Neural Networks (IJCNN), Vol. 1, pp. 761–768.

  244. Stanfill, C., Waltz, D. (1986) Toward memory-based reasoning. Communications of the ACM, Vol. 29, No. 12, pp. 1213–1228.

  245. Strzecha, K. (2001) Image segmentation algorithm based on statistical pattern recognition methods. 6th International IEEE Conference on Experience of Designing and Application of CAD Systems in Microelectronics (CADSM), Lviv–Slavske, Ukraine, pp. 200–201.

  246. Swonger, C.W. (1972) Sample set condensation for a condensed nearest neighbor decision rule for pattern recognition. In: S. Watanabe (ed.), Frontiers of Pattern Recognition, Academic Press, New York City, NY, USA, pp. 511–519.

  247. Tadeusiewicz, R. (1993) Sieci neuronowe. Akademicka Oficyna Wydawnicza, Warszawa.

  248. Tadeusiewicz, R., Flasiński, M. (1991) Rozpoznawanie obrazów. PWN, Warszawa.

  249. Tan, T.-T., Davis, L., Thurimella, R. (1999) One-dimensional index for nearest neighbor search. European Workshop on Content-Based Multimedia Indexing, Toulouse, France.

  250. Tomek, I. (1976a) A generalization of the k-NN rule. IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-6, No. 2, pp. 121–126.

  251. Tomek, I. (1976b) An experiment with the edited nearest-neighbour rule. IEEE Transactions on Systems, Man and Cybernetics, Vol. SMC-6, No. 6, pp. 448–452.

  252. Tomek, I. (1976c) Two modifications of CNN. IEEE Transactions on Systems, Man and Cybernetics, Vol. SMC-6, No. 11, pp. 769–772.

  253. Toussaint, G.T. (1994) A counterexample to Tomek's consistency theorem for a condensed nearest neighbor decision rule. Pattern Recognition Letters, Vol. 15, No. 8, pp. 797–801.

  254. Traina Jr., C., Traina, A.J.M., Seeger, B., Faloutsos, Ch. (2000) Slim-trees: High performance metric trees minimizing overlap between nodes. International Conference on Extending Database Technology, Konstanz, Germany, pp. 51–65.

  255. Traina Jr., C., Traina, A.J.M., Wu, L., Faloutsos, Ch. (2000) Fast feature selection using the fractal dimension. 15th Brazilian Symposium on Databases (SBBD), João Pessoa, Brazil.

  256. Tumer, K., Ghosh, J. (1996) Error correlation and error reduction in ensemble classifiers. Connection Science, Vol. 8, No. 3–4, pp. 385–404.

  257. Uhlmann, J.K. (1991) Satisfying general proximity/similarity queries with metric trees. Information Processing Letters, Vol. 40, No. 4, pp. 175–179.

  258. Vafaie, H., De Jong, K. (1992) Genetic algorithms as a tool for feature selection in machine learning. 4th International Conference on Tools with Artificial Intelligence, Arlington, VA, USA, pp. 200–204.

  259. Webb, G., Pazzani, M. (1998) Adjusted probability naive Bayesian induction. 11th Australian Joint Conference on Artificial Intelligence, Brisbane, QLD, Australia, pp. 285–295.

  260. Weber, R., Schek, H.-J., Blott, S. (1998) A quantitative analysis and performance study for similarity search methods in high-dimensional spaces. 24th International Conference on Very Large Data Bases (VLDB), New York City, NY, USA, pp. 194–205.

  261. Wettschereck, D., Dietterich, T.G. (1994) Locally adaptive nearest neighbor algorithms. Advances in Neural Information Processing Systems, Vol. 6, pp. 184–191.

  262. Wettschereck, D., Dietterich, T.G. (1995) An experimental comparison of the nearest-neighbor and nearest-hyperrectangle algorithms. Machine Learning, Vol. 19, No. 1, pp. 5–27.

  263. White, D.A., Jain, R. (1996) Similarity indexing with the SS-tree. 12th International Conference on Data Engineering (ICDE), New Orleans, LA, USA, pp. 194–205.

  264. Widrow, B., Hoff Jr., M.E. (1960) Adaptive switching circuits. 1960 IRE WESCON Convention Record, pp. 96–104. Reprinted in: Anderson & Rosenfeld (eds.) (1988), Neurocomputing: Foundations of Research.

  265. Wilson, D.L. (1972) Asymptotic properties of nearest neighbour rules using edited data. IEEE Transactions on Systems, Man and Cybernetics, Vol. SMC-2, No. 3, pp. 408–421.

  266. Wilson, D.R. (1997) Advances in Instance-Based Learning Algorithms. PhD thesis, Computer Science Dept., Brigham Young University, Provo, UT, USA.

  267. Wilson, D.R., Martinez, T.R. (1997a) Improved heterogeneous distance functions. Journal of Artificial Intelligence Research, Vol. 6, No. 1, pp. 1–34.

  268. Wilson, D.R., Martinez, T.R. (1997b) Instance pruning techniques. 14th International Conference on Machine Learning, Nashville, TN, USA, pp. 403–411.

  269. Wilson, D.R., Martinez, T.R. (2000) Reduction techniques for instance-based learning algorithms. Machine Learning, Vol. 38, No. 3, pp. 257–286.

  270. Windeatt, T., Ghaderi, R. (2000) Multi-class learning and error-correcting code sensitivity. Electronics Letters, Vol. 36, No. 19, pp. 1630–1632.

  271. Wnek, J., Michalski, R.S. (1994) Discovering representation space transformations for learning concept descriptions combining DNF and M-of-N rules. Working Notes of the ML'94 Workshop on Constructive Induction and Change of Representation, New Brunswick, NJ, USA, pp. 61–68.

  272. Wolpert, D.H. (1992) Stacked generalization. Neural Networks, Vol. 5, No. 2, pp. 241–259.

  273. Wolpert, D.H. (1993) On overfitting avoidance as bias. Technical Report SFI TR 92-03-5001, The Santa Fe Institute, Santa Fe, NM, USA.

  274. Woods, K., Kegelmeyer, W.P., Bowyer, K. (1997) Combination of multiple classifiers using local accuracy estimates. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19, No. 4, pp. 405–410.

  275. Wu, Y., Ianakiev, K.G., Govindaraju, V. (2002) Improved k-nearest neighbor classification. Pattern Recognition, Vol. 35, pp. 2311–2318.

  276. Yang, J., Honavar, V. (1997) Feature subset selection using a genetic algorithm. 2nd International Conference on Genetic Programming, pp. 380–385.

  277. Yao, A.C., Yao, F.F. (1985) A general approach to d-dimensional geometric queries. 17th Annual ACM Symposium on Theory of Computing, Providence, RI, USA, pp. 163–168.

  278. Yao, X., Liu, Y. (1998) Making use of population information in evolutionary artificial neural networks. IEEE Transactions on Systems, Man and Cybernetics, Vol. SMC-28, No. 3, pp. 417–425.

  279. Yianilos, P.N. (1993) Data structures and algorithms for nearest neighbor search in general metric spaces. 4th ACM-SIAM Symposium on Discrete Algorithms (SODA), Austin, TX, USA, pp. 311–321.

  280. Yu, B., Yuan, B. (1993) A more efficient branch and bound algorithm for feature selection. Pattern Recognition, Vol. 26, No. 6, pp. 883–889.

  281. Xie, Z., Hsu, W., Liu, Z., Lee, M.-L. (2002) SNNB: A selective neighborhood based naïve Bayes for lazy learning. Advances in Knowledge Discovery and Data Mining, 6th Pacific-Asia Conference (PAKDD2002), Taipei, Taiwan, pp. 104–114.

  282. Xu, L., Krzyżak, A., Suen, C.Y. (1992) Methods for combining multiple classifiers and their application to handwriting recognition. IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-22, No. 3, pp. 418–435.

  283. Zavrel, J. (1997) An empirical re-examination of weighted voting for k-NN. 7th Belgian-Dutch Conference on Machine Learning (BENELEARN-97), Tilburg, the Netherlands.

  284. Zenobi, G., Cunningham, P. (2001) Using diversity in preparing ensembles of classifiers based on different feature subsets to minimize generalization error. 12th European Conference on Machine Learning (ECML), Freiburg, Germany, pp. 567–587.

  285. Zenobi, G., Cunningham, P. (2002) An approach to aggregating ensembles of lazy learners that supports explanation. 6th European Conference on Case-Based Reasoning, Aberdeen, Scotland, pp. 436–447.

  286. Zhang, B.-T. (1994) Accelerated learning by active example selection. International Journal of Neural Systems, Vol. 5, No. 1, pp. 67–75.

  287. Zhang, J. (1992) Selecting typical instances in instance-based learning. 9th International Conference on Machine Learning, Aberdeen, Scotland, pp. 470–479.

  288. Zhang, J., Yim, Y.-S., Yang, J. (1997) Intelligent selection of instances for prediction functions in lazy learning systems. Artificial Intelligence Review, Vol. 11, No. 1–5, pp. 175–191.

  289. Zheng, Z. (1998) Naive Bayesian classifier committees. 10th European Conference on Machine Learning, Chemnitz, Germany, pp. 196–207.

1 S. Deorowicz, private correspondence, 2002.

2 A striking example of the weakness of this method, regardless of its variant, is given by Jankowski (1999, p. 140): "assuming that missing values are replaced with the most frequent value of a given feature, one may expect the missing information about an infant's height to come out at about 165 cm".
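
A minimal sketch of this pitfall (the column name and values are hypothetical; pandas is merely one convenient way to compute the mode):

    import pandas as pd

    # Hypothetical sample: four adults and one infant with a missing height.
    df = pd.DataFrame({"height_cm": [165.0, 165.0, 172.0, 180.0, None]})

    # Most-frequent-value imputation ignores all context: the infant's
    # missing height is filled in with the adult mode, 165 cm.
    most_frequent = df["height_cm"].mode()[0]
    df["height_cm"] = df["height_cm"].fillna(most_frequent)
    print(df)  # the last row now claims a 165 cm tall infant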

3 Trees that split on multiple features at once (multivariate trees) overcome this problem, but at the cost of longer training time and a resulting rule set that is harder for a human to understand.

4 A. Jóźwik, private correspondence, 2000.

5 M. Kubat, private correspondence, 2001.

6 The Three Mile Island power plant disaster (and perhaps Chernobyl as well) would not have happened if, to put it briefly, people had trusted the machines (Michie et al., 1994, p. 15).

7 In the case of Rastrigin and Erenstein, the likely "culprit" was the language in which the article was written (Russian).

8 L. Kuncheva, private correspondence, 2002.

9 Uhlmann uses the term "metric tree" (Uhlmann, 1991).

10 In this class of algorithms one usually does not distinguish between the memory cost and the time cost of preprocessing, as the two are typically similar.

11 A similar observation is confirmed by Dr A. Jóźwik (private correspondence, 2000–2002).

12 The translation of the term is the author's own.

13 In practice, however, exceptions can be quite frequent; for example, according to the cited work of Skalak, the "second nearest neighbor" classifier is significantly better than the "first nearest neighbor" classifier on the Iris set.

14 The trailing "N" stands for "Neighborhood".

15 Ties are often broken in favor of the most numerous class. On the Ferrites set this made no difference, however, since the classes in that set are ordered from the largest to the smallest.

16 Full sorting is not even necessary.
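
The last two footnotes can be illustrated with a minimal sketch (the data and function are hypothetical, not code from the dissertation): np.argpartition selects the k smallest distances without fully sorting them, and scanning the classes from the most to the least numerous breaks voting ties in favor of the larger class:

    import numpy as np

    def knn_predict(X_train, y_train, x, k, classes_by_size):
        """k-NN vote; classes_by_size lists labels from the largest class down."""
        dists = np.linalg.norm(X_train - x, axis=1)   # distances to all samples
        nearest = np.argpartition(dists, k)[:k]       # partial selection, no full sort
        votes = y_train[nearest]
        # max() returns the first label with the highest count, so listing the
        # classes from the most to the least numerous resolves ties as in fn. 15.
        return max(classes_by_size, key=lambda c: int(np.sum(votes == c)))

    # Hypothetical usage: three training points, two classes.
    X = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
    y = np.array([0, 0, 1])
    print(knn_predict(X, y, np.array([0.2, 0.1]), k=2, classes_by_size=[0, 1]))  # -> 0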


