1 Establishing trust
Artificial intelligence is the study of machines that exhibit traits associated with a human mind, such as perception, learning, reasoning, planning, and problem-solving. Although it had a prior history under different names (e.g., cybernetics and automata studies), we may consider the genesis of the field of artificial intelligence to be the Dartmouth Summer Research Project on Artificial Intelligence in the summer of 1956. Soon after that, the area split into two camps: one focused on symbolic systems, problem-solving, psychology, performance, and serial architectures, and the other focused on continuous systems, pattern recognition, neuroscience, learning, and parallel architectures.[1] This book is primarily focused on the second of these two partitions of artificial intelligence, namely machine learning.
The term machine learning was popularized by Arthur Samuel's description of his computer system that could play checkers,[2] not because it was explicitly programmed to do so, but because it learned from the experience of previous games. In general, machine learning is the study of algorithms that take data and information from observations and interactions as input and generalize from that specific input to exhibit traits of human thought. Generalization is a process by which particular examples are abstracted to more encompassing concepts or decision rules.
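The idea of generalization can be made concrete with a minimal sketch: from a handful of labeled examples, a learner abstracts a decision rule that also applies to inputs it has never seen. The data, the `learn_threshold` function, and the midpoint rule below are all hypothetical illustrations chosen for simplicity, not anything described in the text.

```python
def learn_threshold(examples):
    """Learn a one-dimensional decision rule from labeled examples
    by placing a threshold midway between the two classes."""
    lows = [x for x, label in examples if label == 0]
    highs = [x for x, label in examples if label == 1]
    return (max(lows) + min(highs)) / 2

# Specific observations: (feature value, class label)
training_examples = [(1.0, 0), (2.0, 0), (6.0, 1), (7.0, 1)]

threshold = learn_threshold(training_examples)  # midpoint of 2.0 and 6.0 -> 4.0

def predict(x):
    """Apply the learned decision rule to any input, seen or unseen."""
    return 1 if x >= threshold else 0

# The rule generalizes: neither 3.0 nor 5.5 appeared in training
print(predict(3.0))  # 0
print(predict(5.5))  # 1
```

The learner never memorizes the four training points; it compresses them into a single threshold, which is precisely the abstraction from particular examples to an encompassing decision rule.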