JNTUK B.Tech 4-1 (R19) Machine Learning Material Download: JNTU Kakinada B.Tech R19 Regulation 4th Year 1st Semester Machine Learning syllabus and material are now available for download. Candidates looking for JNTUK B.Tech R19 4-1 materials can download them here.

UNIT I

Introduction: Definition of learning systems, Goals and applications of machine learning, Aspects of developing a learning system: training data, concept representation, function approximation.

Inductive Classification: The concept learning task, Concept learning as search through a hypothesis space, General-to-specific ordering of hypotheses, Finding maximally specific hypotheses, Version spaces and the candidate elimination algorithm, Learning conjunctive concepts, The importance of inductive bias.
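As a quick illustration of finding maximally specific hypotheses, the classic Find-S procedure can be sketched as below; the EnjoySport-style attributes and data are illustrative examples, not part of the syllabus.

```python
# A minimal Find-S sketch: learn the most specific conjunctive hypothesis
# consistent with the positive examples. '?' means "any value".

def find_s(examples, labels):
    hypothesis = None
    for example, label in zip(examples, labels):
        if label != "yes":           # Find-S ignores negative examples
            continue
        if hypothesis is None:       # first positive: copy it exactly
            hypothesis = list(example)
        else:                        # generalize attributes that differ
            hypothesis = [h if h == v else "?"
                          for h, v in zip(hypothesis, example)]
    return hypothesis

# Toy EnjoySport-style data: (Sky, AirTemp, Humidity, Wind)
examples = [
    ("sunny", "warm", "normal", "strong"),
    ("sunny", "warm", "high",   "strong"),
    ("rainy", "cold", "high",   "strong"),
    ("sunny", "warm", "high",   "strong"),
]
labels = ["yes", "yes", "no", "yes"]

print(find_s(examples, labels))  # ['sunny', 'warm', '?', 'strong']
```

The third (negative) example is skipped, so only the positives drive generalization.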

UNIT II

Decision Tree Learning: Representing concepts as decision trees, Recursive induction of decision trees, Picking the best splitting attribute: entropy and information gain, Searching for simple trees and computational complexity, Occam's razor, Overfitting, noisy data, and pruning.
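The entropy and information-gain criterion for picking the best splitting attribute can be sketched in a few lines; the single-attribute toy data below is illustrative only.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Expected entropy reduction from splitting on attribute index attr."""
    total = len(labels)
    partitions = {}
    for row, label in zip(rows, labels):
        partitions.setdefault(row[attr], []).append(label)
    remainder = sum(len(p) / total * entropy(p)
                    for p in partitions.values())
    return entropy(labels) - remainder

# A perfectly predictive attribute removes all uncertainty:
rows = [("sunny",), ("sunny",), ("rainy",), ("rainy",)]
labels = ["yes", "yes", "no", "no"]
print(information_gain(rows, labels, 0))  # 1.0
```

Recursive tree induction picks the attribute with the highest gain at each node and recurses on the partitions.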

Experimental Evaluation of Learning Algorithms: Measuring the accuracy of learned hypotheses. Comparing learning algorithms: cross-validation, learning curves, and statistical hypothesis testing.
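A k-fold cross-validation index generator can be sketched in plain Python as below; the fold sizing is one reasonable convention among several.

```python
# Minimal k-fold cross-validation index generator (no libraries).

def k_fold_indices(n, k):
    """Yield (train, test) index lists for k-fold cross-validation.

    The first n % k folds get one extra example so all n points are used.
    """
    start = 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size

# Example: 10 examples, 3 folds -> test folds of sizes 4, 3, 3.
for train, test in k_fold_indices(10, 3):
    print(len(train), len(test))
```

Each example appears in exactly one test fold, so averaging per-fold accuracy estimates generalization from the whole dataset.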

UNIT III

Computational Learning Theory: Models of learnability: learning in the limit; probably approximately correct (PAC) learning. Sample complexity for infinite hypothesis spaces, Vapnik-Chervonenkis dimension.
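For finite hypothesis spaces, the standard PAC sample-complexity bound for a consistent learner, m >= (1/epsilon)(ln|H| + ln(1/delta)), can be computed directly; the |H|, epsilon, and delta values below are illustrative.

```python
import math

def pac_sample_bound(hypothesis_space_size, epsilon, delta):
    """Samples sufficient for a consistent learner over a finite space H
    to be probably (1 - delta) approximately (error < epsilon) correct:
    m >= (1/epsilon) * (ln|H| + ln(1/delta))."""
    return math.ceil((1 / epsilon) *
                     (math.log(hypothesis_space_size) + math.log(1 / delta)))

# e.g. |H| = 2**10 boolean hypotheses, 5% error, 95% confidence:
print(pac_sample_bound(2 ** 10, 0.05, 0.05))  # 199
```

Note the bound grows only logarithmically in |H|; infinite hypothesis spaces need the VC-dimension version instead.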

Rule Learning: Propositional and First-Order, Translating decision trees into rules, Heuristic rule induction using separate-and-conquer and information gain, First-order Horn-clause induction (Inductive Logic Programming) and FOIL, Learning recursive rules, Inverse resolution, GOLEM, and Progol.

UNIT IV

Artificial Neural Networks: Neurons and biological motivation, Linear threshold units. Perceptrons: representational limitation and gradient descent training, Multilayer networks and backpropagation, Hidden layers and constructing intermediate, distributed representations. Overfitting, learning network structure, recurrent networks.
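The perceptron training rule for a single linear threshold unit can be sketched as below; the AND-style data and learning rate are illustrative choices.

```python
# Minimal perceptron sketch: a linear threshold unit trained with the
# mistake-driven perceptron update rule (labels are +1 / -1).

def train_perceptron(data, labels, epochs=10, lr=0.1):
    weights = [0.0] * len(data[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            activation = sum(w * xi for w, xi in zip(weights, x)) + bias
            prediction = 1 if activation > 0 else -1
            if prediction != y:  # update only on mistakes
                weights = [w + lr * y * xi for w, xi in zip(weights, x)]
                bias += lr * y
    return weights, bias

# Linearly separable AND-style data converges to a perfect separator:
data = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [-1, -1, -1, 1]
w, b = train_perceptron(data, labels, epochs=20)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
         for x in data]
print(preds)  # [-1, -1, -1, 1]
```

On data that is not linearly separable (e.g. XOR) this rule never converges, which motivates multilayer networks and backpropagation.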

Support Vector Machines: Maximum margin linear separators. Quadratic programming solution to finding maximum margin separators. Kernels for learning non-linear functions.
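To illustrate how kernels enable non-linear learning without the full quadratic-programming machinery of an SVM, the sketch below uses a kernelized (dual) perceptron with an RBF kernel on XOR; it is a simplified stand-in, not an SVM solver.

```python
import math

def rbf_kernel(x, z, gamma=1.0):
    """Gaussian (RBF) kernel: similarity decays with squared distance."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def kernel_perceptron(data, labels, kernel, epochs=10):
    """Dual perceptron: keep a mistake count alpha_i per training point
    instead of an explicit weight vector, so only kernel values are used."""
    alphas = [0] * len(data)
    for _ in range(epochs):
        for i, (x, y) in enumerate(zip(data, labels)):
            score = sum(a * yj * kernel(xj, x)
                        for a, yj, xj in zip(alphas, labels, data))
            if y * score <= 0:
                alphas[i] += 1
    return alphas

def predict(x, data, labels, alphas, kernel):
    score = sum(a * y * kernel(xi, x)
                for a, y, xi in zip(alphas, labels, data))
    return 1 if score > 0 else -1

# XOR is not linearly separable, but the RBF kernel handles it:
data = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [-1, 1, 1, -1]
alphas = kernel_perceptron(data, labels, rbf_kernel, epochs=20)
print([predict(x, data, labels, alphas, rbf_kernel) for x in data])  # [-1, 1, 1, -1]
```

An SVM uses the same kernel trick but chooses the alphas by solving a quadratic program that maximizes the margin.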

UNIT V

Bayesian Learning: Probability theory and Bayes rule. Naive Bayes learning algorithm. Parameter smoothing. Generative vs. discriminative training. Logistic regression. Bayes nets and Markov nets for representing dependencies.
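The naive Bayes algorithm with Laplace (add-alpha) parameter smoothing can be sketched for categorical attributes as below; the weather-style data is an illustrative example.

```python
import math
from collections import Counter

def train_nb(rows, labels, alpha=1.0):
    """Fit a categorical naive Bayes model with add-alpha smoothing."""
    classes = Counter(labels)
    n_attrs = len(rows[0])
    # counts[c][j][v] = class-c examples with value v for attribute j
    counts = {c: [Counter() for _ in range(n_attrs)] for c in classes}
    values = [set() for _ in range(n_attrs)]
    for row, y in zip(rows, labels):
        for j, v in enumerate(row):
            counts[y][j][v] += 1
            values[j].add(v)
    return classes, counts, values, alpha

def predict_nb(model, row):
    classes, counts, values, alpha = model
    total = sum(classes.values())
    best, best_score = None, -math.inf
    for c, nc in classes.items():
        score = math.log(nc / total)  # log prior
        for j, v in enumerate(row):   # smoothed conditional log-likelihoods
            score += math.log((counts[c][j][v] + alpha) /
                              (nc + alpha * len(values[j])))
        if score > best_score:
            best, best_score = c, score
    return best

rows = [("sunny", "hot"), ("sunny", "mild"), ("rainy", "mild"), ("rainy", "cool")]
labels = ["no", "no", "yes", "yes"]
model = train_nb(rows, labels)
print(predict_nb(model, ("rainy", "mild")))  # yes
```

Smoothing keeps unseen attribute values from zeroing out an entire class posterior.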

Instance-Based Learning: Constructing explicit generalizations versus comparing to past specific examples. k-Nearest-neighbor algorithm. Case-based learning.
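The k-nearest-neighbor algorithm can be sketched in a few lines; the 2-D points and k value below are illustrative.

```python
import math
from collections import Counter

def knn_classify(train, labels, query, k=3):
    """Classify query by majority vote among its k nearest training points
    (Euclidean distance)."""
    dists = sorted((math.dist(x, query), y)
                   for x, y in zip(train, labels))
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

train = [(1.0, 1.0), (1.2, 0.8), (4.0, 4.0), (4.2, 4.1)]
labels = ["a", "a", "b", "b"]
print(knn_classify(train, labels, (1.1, 1.0), k=3))  # a
```

Unlike the eager learners above, kNN builds no explicit generalization at training time; all work happens at query time by comparing against stored examples.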

UNIT-V Material Download Here

E-Text Books: Download from below

1) T.M. Mitchell, “Machine Learning”, McGraw-Hill, 1997.

2) Machine Learning, Saikat Dutt, Subramanian Chandramouli, Amit Kumar Das, Pearson, 2019.

3) Ethem Alpaydin, “Introduction to Machine Learning”, MIT Press, 2004.

4) Stephen Marsland, “Machine Learning -An Algorithmic Perspective”, Second Edition, Chapman and Hall/CRC Machine Learning and Pattern Recognition Series, 2014.
