
 
Machine Learning (Hardcover)
    · Author: Tom M. Mitchell
    · Publisher: McGraw-Hill Science Engineering
    · Publication year: 1997
    · Condition: Excellent overall, aside from occasional pencil and highlighter underlining on roughly 170 pages / Hardcover / 432 pages / 165*240mm / Language: English / ISBN 9780070428072 (0070428077)
    · ISBN: 9780070428072

This book covers the field of machine learning, which is the study of algorithms that allow computer programs to automatically improve through experience. The book is intended to support upper level undergraduate and introductory level graduate courses in machine learning.


Preface xv (1)
 Acknowledgments xvi
1 Introduction 1 (19)
1.1 Well-Posed Learning Problems 2 (3)
1.2 Designing a Learning System 5 (9)
1.2.1 Choosing the Training Experience 5 (2)
1.2.2 Choosing the Target Function 7 (1)
1.2.3 Choosing a Representation for the Target Function 8 (1)
1.2.4 Choosing a Function Approximation Algorithm 9 (2)
1.2.5 The Final Design 11 (3)
1.3 Perspectives and Issues in Machine Learning 14 (2)
1.3.1 Issues in Machine Learning 15 (1)
1.4 How to Read This Book 16 (1)
1.5 Summary and Further Reading 17 (1)
Exercises 18 (1)
References 19 (1)
2 Concept Learning and the General-to-Specific Ordering 20 (32)
2.1 Introduction 20 (1)
2.2 A Concept Learning Task 21 (2)
2.2.1 Notation 22 (1)
2.2.2 The Inductive Learning Hypothesis 23 (1)
2.3 Concept Learning as Search 23 (3)
2.3.1 General-to-Specific Ordering of Hypotheses 24 (2)
2.4 FIND-S: Finding a Maximally Specific Hypothesis 26 (3)
2.5 Version Spaces and the CANDIDATE-ELIMINATION Algorithm 29 (8)
2.5.1 Representation 29 (1)
2.5.2 The LIST-THEN-ELIMINATE Algorithm 30 (1)
2.5.3 A More Compact Representation for Version Spaces 30 (2)
2.5.4 CANDIDATE-ELIMINATION Learning Algorithm 32 (1)
2.5.5 An Illustrative Example 33 (4)
2.6 Remarks on Version Spaces and CANDIDATE-ELIMINATION 37 (2)
2.6.1 Will the CANDIDATE-ELIMINATION Algorithm Converge to the Correct Hypothesis? 37 (1)
2.6.2 What Training Example Should the Learner Request Next? 37 (1)
2.6.3 How Can Partially Learned Concepts Be Used? 38 (1)
2.7 Inductive Bias 39 (6)
2.7.1 A Biased Hypothesis Space 40 (1)
2.7.2 An Unbiased Learner 40 (2)
2.7.3 The Futility of Bias-Free Learning 42 (3)
2.8 Summary and Further Reading 45 (2)
Exercises 47 (3)
References 50 (2)
3 Decision Tree Learning 52 (29)
3.1 Introduction 52 (1)
3.2 Decision Tree Representation 52 (2)
3.3 Appropriate Problems for Decision Tree Learning 54 (1)
3.4 The Basic Decision Tree Learning Algorithm 55 (5)
3.4.1 Which Attribute Is the Best Classifier? 55 (4)
3.4.2 An Illustrative Example 59 (1)
3.5 Hypothesis Space Search in Decision Tree Learning 60 (3)
3.6 Inductive Bias in Decision Tree Learning 63 (3)
3.6.1 Restriction Biases and Preference Biases 63 (2)
3.6.2 Why Prefer Short Hypotheses? 65 (1)
3.7 Issues in Decision Tree Learning 66 (10)
3.7.1 Avoiding Overfitting the Data 66 (6)
3.7.2 Incorporating Continuous-Valued Attributes 72 (1)
3.7.3 Alternative Measures for Selecting Attributes 73 (2)
3.7.4 Handling Training Examples with Missing Attribute Values 75 (1)
3.7.5 Handling Attributes with Differing Costs 75 (1)
3.8 Summary and Further Reading 76 (1)
Exercises 77 (1)
References 78 (3)
4 Artificial Neural Networks 81 (47)
4.1 Introduction 81 (1)
4.1.1 Biological Motivation 82 (1)
4.2 Neural Network Representations 82 (1)
4.3 Appropriate Problems for Neural Network Learning 83 (3)
4.4 Perceptrons 86 (9)
4.4.1 Representational Power of Perceptrons 86 (2)
4.4.2 The Perceptron Training Rule 88 (1)
4.4.3 Gradient Descent and the Delta Rule 89 (5)
4.4.4 Remarks 94 (1)
4.5 Multilayer Networks and the BACKPROPAGATION Algorithm 95 (9)
4.5.1 A Differentiable Threshold Unit 95 (2)
4.5.2 The BACKPROPAGATION Algorithm 97 (4)
4.5.3 Derivation of the BACKPROPAGATION Rule 101(3)
4.6 Remarks on the BACKPROPAGATION Algorithm 104(8)
4.6.1 Convergence and Local Minima 104(1)
4.6.2 Representational Power of Feedforward Networks 105(1)
4.6.3 Hypothesis Space Search and Inductive Bias 106(1)
4.6.4 Hidden Layer Representations 106(2)
4.6.5 Generalization, Overfitting, and Stopping Criterion 108(4)
4.7 An Illustrative Example: Face Recognition 112(5)
4.7.1 The Task 112(1)
4.7.2 Design Choices 113(3)
4.7.3 Learned Hidden Representations 116(1)
4.8 Advanced Topics in Artificial Neural Networks 117(5)
4.8.1 Alternative Error Functions 117(2)
4.8.2 Alternative Error Minimization Procedures 119(1)
4.8.3 Recurrent Networks 119(2)
4.8.4 Dynamically Modifying Network Structure 121(1)
4.9 Summary and Further Reading 122(2)
Exercises 124(2)
References 126(2)
5 Evaluating Hypotheses 128(26)
5.1 Motivation 128(1)
5.2 Estimating Hypothesis Accuracy 129(3)
5.2.1 Sample Error and True Error 130(1)
5.2.2 Confidence Intervals for Discrete-Valued Hypotheses 131(1)
5.3 Basics of Sampling Theory 132(10)
5.3.1 Error Estimation and Estimating Binomial Proportions 133(2)
5.3.2 The Binomial Distribution 135(1)
5.3.3 Mean and Variance 136(1)
5.3.4 Estimators, Bias, and Variance 137(1)
5.3.5 Confidence Intervals 138(3)
5.3.6 Two-Sided and One-Sided Bounds 141(1)
5.4 A General Approach for Deriving Confidence Intervals 142(1)
5.4.1 Central Limit Theorem 142(1)
5.5 Difference in Error of Two Hypotheses 143(2)
5.5.1 Hypothesis Testing 144(1)
5.6 Comparing Learning Algorithms 145(5)
5.6.1 Paired t Tests 148(1)
5.6.2 Practical Considerations 149(1)
5.7 Summary and Further Reading 150(2)
Exercises 152(1)
References 152(2)
6 Bayesian Learning 154(47)
6.1 Introduction 154(1)
6.2 Bayes Theorem 156(2)
6.2.1 An Example 157(1)
6.3 Bayes Theorem and Concept Learning 158(6)
6.3.1 Brute-Force Bayes Concept Learning 159(3)
6.3.2 MAP Hypotheses and Consistent Learners 162(2)
6.4 Maximum Likelihood and Least-Squared Error Hypotheses 164(3)
6.5 Maximum Likelihood Hypotheses for Predicting Probabilities 167(4)
6.5.1 Gradient Search to Maximize Likelihood in a Neural Net 170(1)
6.6 Minimum Description Length Principle 171(3)
6.7 Bayes Optimal Classifier 174(2)
6.8 Gibbs Algorithm 176(1)
6.9 Naive Bayes Classifier 177(3)
6.9.1 An Illustrative Example 178(2)
6.10 An Example: Learning to Classify Text 180(4)
6.10.1 Experimental Results 182(2)
6.11 Bayesian Belief Networks 184(7)
6.11.1 Conditional Independence 185(1)
6.11.2 Representation 186(1)
6.11.3 Inference 187(1)
6.11.4 Learning Bayesian Belief Networks 188(1)
6.11.5 Gradient Ascent Training of Bayesian Networks 188(2)
6.11.6 Learning the Structure of Bayesian Networks 190(1)
6.12 The EM Algorithm 191(6)
6.12.1 Estimating Means of k Gaussians 191(3)
6.12.2 General Statement of EM Algorithm 194(1)
6.12.3 Derivation of the k Means Algorithm 195(2)
6.13 Summary and Further Reading 197(1)
Exercises 198(1)
References 199(2)
7 Computational Learning Theory 201(29)
7.1 Introduction 201(2)
7.2 Probably Learning an Approximately Correct Hypothesis 203(4)
7.2.1 The Problem Setting 203(1)
7.2.2 Error of a Hypothesis 204(1)
7.2.3 PAC Learnability 205(2)
7.3 Sample Complexity for Finite Hypothesis Spaces 207(7)
7.3.1 Agnostic Learning and Inconsistent Hypotheses 210(1)
7.3.2 Conjunctions of Boolean Literals Are PAC-Learnable 211(1)
7.3.3 PAC-Learnability of Other Concept Classes 212(2)
7.4 Sample Complexity for Infinite Hypothesis Spaces 214(6)
7.4.1 Shattering a Set of Instances 214(1)
7.4.2 The Vapnik-Chervonenkis Dimension 215(2)
7.4.3 Sample Complexity and the VC Dimension 217(1)
7.4.4 VC Dimension for Neural Networks 218(2)
7.5 The Mistake Bound Model of Learning 220(5)
7.5.1 Mistake Bound for the FIND-S Algorithm 220(1)
7.5.2 Mistake Bound for the HALVING Algorithm 221(1)
7.5.3 Optimal Mistake Bounds 222(1)
7.5.4 WEIGHTED-MAJORITY Algorithm 223(2)
7.6 Summary and Further Reading 225(2)
Exercises 227(2)
References 229(1)
8 Instance-Based Learning 230(19)
8.1 Introduction 230(1)
8.2 k-NEAREST NEIGHBOR LEARNING 231(5)
8.2.1 Distance-Weighted NEAREST NEIGHBOR Algorithm 233(1)
8.2.2 Remarks on k-NEAREST NEIGHBOR Algorithm 234(2)
8.2.3 A Note on Terminology 236(1)
8.3 Locally Weighted Regression 236(2)
8.3.1 Locally Weighted Linear Regression 237(1)
8.3.2 Remarks on Locally Weighted Regression 238(1)
8.4 Radial Basis Functions 238(2)
8.5 Case-Based Reasoning 240(4)
8.6 Remarks on Lazy and Eager Learning 244(1)
8.7 Summary and Further Reading 245(2)
Exercises 247(1)
References 247(2)
9 Genetic Algorithms 249(25)
9.1 Motivation 249(1)
9.2 Genetic Algorithms 250(6)
9.2.1 Representing Hypotheses 252(1)
9.2.2 Genetic Operators 253(2)
9.2.3 Fitness Function and Selection 255(1)
9.3 An Illustrative Example 256(3)
9.3.1 Extensions 258(1)
9.4 Hypothesis Space Search 259(3)
9.4.1 Population Evolution and the Schema Theorem 260(2)
9.5 Genetic Programming 262(4)
9.5.1 Representing Programs 262(1)
9.5.2 Illustrative Example 263(2)
9.5.3 Remarks on Genetic Programming 265(1)
9.6 Models of Evolution and Learning 266(2)
9.6.1 Lamarckian Evolution 266(1)
9.6.2 Baldwin Effect 267(1)
9.7 Parallelizing Genetic Algorithms 268(1)
9.8 Summary and Further Reading 268(2)
Exercises 270(1)
References 270(4)
10 Learning Sets of Rules 274(33)
10.1 Introduction 274(1)
10.2 Sequential Covering Algorithms 275(5)
10.2.1 General to Specific Beam Search 277(2)
10.2.2 Variations 279(1)
10.3 Learning Rule Sets: Summary 280(3)
10.4 Learning First-Order Rules 283(2)
10.4.1 First-Order Horn Clauses 283(1)
10.4.2 Terminology 284(1)
10.5 Learning Sets of First-Order Rules: FOIL 285(6)
10.5.1 Generating Candidate Specializations in FOIL 287(1)
10.5.2 Guiding the Search in FOIL 288(2)
10.5.3 Learning Recursive Rule Sets 290(1)
10.5.4 Summary of FOIL 290(1)
10.6 Induction as Inverted Deduction 291(2)
10.7 Inverting Resolution 293(8)
10.7.1 First-Order Resolution 296(1)
10.7.2 Inverting Resolution: First-Order Case 297(1)
10.7.3 Summary of Inverse Resolution 298(1)
10.7.4 Generalization, θ-Subsumption, and Entailment 299(1)
10.7.5 PROGOL 300(1)
10.8 Summary and Further Reading 301(2)
Exercises 303(1)
References 304(3)
11 Analytical Learning 307(27)
11.1 Introduction 307(5)
11.1.1 Inductive and Analytical Learning Problems 310(2)
11.2 Learning with Perfect Domain Theories: PROLOG-EBG 312(7)
11.2.1 An Illustrative Trace 313(6)
11.3 Remarks on Explanation-Based Learning 319(6)
11.3.1 Discovering New Features 320(1)
11.3.2 Deductive Learning 321(1)
11.3.3 Inductive Bias in Explanation-Based Learning 322(1)
11.3.4 Knowledge Level Learning 323(2)
11.4 Explanation-Based Learning of Search Control Knowledge 325(3)
11.5 Summary and Further Reading 328(2)
Exercises 330(1)
References 331(3)
12 Combining Inductive and Analytical Learning 334(33)
12.1 Motivation 334(3)
12.2 Inductive-Analytical Approaches to Learning 337(3)
12.2.1 The Learning Problem 337(2)
12.2.2 Hypothesis Space Search 339(1)
12.3 Using Prior Knowledge to Initialize the Hypothesis 340(6)
12.3.1 The KBANN Algorithm 340(1)
12.3.2 An Illustrative Example 341(3)
12.3.3 Remarks 344(2)
12.4 Using Prior Knowledge to Alter the Search Objective 346(11)
12.4.1 The TANGENTPROP Algorithm 347(2)
12.4.2 An Illustrative Example 349(1)
12.4.3 Remarks 350(1)
12.4.4 The EBNN Algorithm 351(4)
12.4.5 Remarks 355(2)
12.5 Using Prior Knowledge to Augment Search Operators 357(4)
12.5.1 The FOCL Algorithm 357(3)
12.5.2 Remarks 360(1)
12.6 State of the Art 361(1)
12.7 Summary and Further Reading 362(1)
Exercises 363(1)
References 364(3)
13 Reinforcement Learning 367(24)
13.1 Introduction 367(3)
13.2 The Learning Task 370(3)
13.3 Q Learning 373(8)
13.3.1 The Q Function 374(1)
13.3.2 An Algorithm for Learning Q 374(2)
13.3.3 An Illustrative Example 376(1)
13.3.4 Convergence 377(2)
13.3.5 Experimentation Strategies 379(1)
13.3.6 Updating Sequence 379(2)
13.4 Nondeterministic Rewards and Actions 381(2)
13.5 Temporal Difference Learning 383(1)
13.6 Generalizing from Examples 384(1)
13.7 Relationship to Dynamic Programming 385(1)
13.8 Summary and Further Reading 386(2)
Exercises 388(1)
References 388(3)
 Appendix Notation 391(3)
 Indexes
 Author Index 394(6)
 Subject Index 400


