Sale!

Test Bank for Artificial Intelligence: Foundations of Computational Agents (2nd Edition) by David L. Poole

By: David L. Poole
  • ISBN-10:  110719539X / ISBN-13:  9781107195394
  • Ebook Details

    • Edition: 2nd edition
    • Format: Downloadable ZIP File
    • Resource Type: Test Bank
    • Publication: 2017
    • Duration: Unlimited downloads
    • Delivery: Instant Download
     

    Price: $35.00 (sale price: $30.00)

    SKU: 4e4c14c9c6a2

    Table of Contents:

    Title
    Copyright
    Contents
    Figures
    Preface
    Part I Agents in the World: What are Agents and How Can They be Built?
    Chapter 1: Artificial Intelligence and Agents
    1.1 What is Artificial Intelligence?
    1.1.1 Artificial and Natural Intelligence
    1.2 A Brief History of Artificial Intelligence
    1.2.1 Relationship to Other Disciplines
    1.3 Agents Situated in Environments
    1.4 Designing Agents
    1.4.1 Design Time, Offline and Online Computation
    1.4.2 Tasks
    1.4.3 Defining a Solution
    1.4.4 Representations
    1.5 Agent Design Space
    1.5.1 Modularity
    1.5.2 Planning Horizon
    1.5.3 Representation
    1.5.4 Computational Limits
    1.5.5 Learning
    1.5.6 Uncertainty
    1.5.7 Preference
    1.5.8 Number of Agents
    1.5.9 Interaction
    1.5.10 Interaction of the Dimensions
    1.6 Prototypical Applications
    1.6.1 An Autonomous Delivery Robot
    1.6.2 A Diagnostic Assistant
    1.6.3 An Intelligent Tutoring System
    1.6.4 A Trading Agent
    1.6.5 Smart House
    1.7 Overview of the Book
    1.8 Review
    1.9 References and Further Reading
    1.10 Exercises
    Chapter 2: Agent Architectures and Hierarchical Control
    2.1 Agents
    2.2 Agent Systems
    2.3 Hierarchical Control
    2.4 Acting with Reasoning
    2.4.1 Agents Modeling the World
    2.4.2 Knowledge and Acting
    2.4.3 Design Time and Offline Computation
    2.4.4 Online Computation
    2.5 Review
    2.6 References and Further Reading
    2.7 Exercises
    Part II Reasoning, Planning and Learning with Certainty
    Chapter 3: Searching for Solutions
    3.1 Problem Solving as Search
    3.2 State Spaces
    3.3 Graph Searching
    3.3.1 Formalizing Graph Searching
    3.4 A Generic Searching Algorithm
    3.5 Uninformed Search Strategies
    3.5.1 Breadth-First Search
    3.5.2 Depth-First Search
    3.5.3 Iterative Deepening
    3.5.4 Lowest-Cost-First Search
    3.6 Heuristic Search
    3.6.1 A∗ Search
    3.6.2 Designing a Heuristic Function
    3.7 Pruning the Search Space
    3.7.1 Cycle Pruning
    3.7.2 Multiple-Path Pruning
    3.7.3 Summary of Search Strategies
    3.8 More Sophisticated Search
    3.8.1 Branch and Bound
    3.8.2 Direction of Search
    3.8.3 Dynamic Programming
    3.9 Review
    3.10 References and Further Reading
    3.11 Exercises
    Chapter 4: Reasoning with Constraints
    4.1 Possible Worlds, Variables, and Constraints
    4.1.1 Variables and Worlds
    4.1.2 Constraints
    4.1.3 Constraint Satisfaction Problems
    4.2 Generate-and-Test Algorithms
    4.3 Solving CSPs Using Search
    4.4 Consistency Algorithms
    4.5 Domain Splitting
    4.6 Variable Elimination
    4.7 Local Search
    4.7.1 Iterative Best Improvement
    4.7.2 Randomized Algorithms
    4.7.3 Local Search Variants
    4.7.4 Evaluating Randomized Algorithms
    4.7.5 Random Restart
    4.8 Population-Based Methods
    4.9 Optimization
    4.9.1 Systematic Methods for Optimization
    4.9.2 Local Search for Optimization
    4.10 Review
    4.11 References and Further Reading
    4.12 Exercises
    Chapter 5: Propositions and Inference
    5.1 Propositions
    5.1.1 Syntax of Propositional Calculus
    5.1.2 Semantics of the Propositional Calculus
    5.2 Propositional Constraints
    5.2.1 Clausal Form for Consistency Algorithms
    5.2.2 Exploiting Propositional Structure in Local Search
    5.3 Propositional Definite Clauses
    5.3.1 Questions and Answers
    5.3.2 Proofs
    5.4 Knowledge Representation Issues
    5.4.1 Background Knowledge and Observations
    5.4.2 Querying the User
    5.4.3 Knowledge-Level Explanation
    5.4.4 Knowledge-Level Debugging
    5.5 Proving by Contradiction
    5.5.1 Horn Clauses
    5.5.2 Assumables and Conflicts
    5.5.3 Consistency-Based Diagnosis
    5.5.4 Reasoning with Assumptions and Horn Clauses
    5.6 Complete Knowledge Assumption
    5.6.1 Non-monotonic Reasoning
    5.6.2 Proof Procedures for Negation as Failure
    5.7 Abduction
    5.8 Causal Models
    5.9 Review
    5.10 References and Further Reading
    5.11 Exercises
    Chapter 6: Planning with Certainty
    6.1 Representing States, Actions, and Goals
    6.1.1 Explicit State-Space Representation
    6.1.2 The STRIPS Representation
    6.1.3 Feature-Based Representation of Actions
    6.1.4 Initial States and Goals
    6.2 Forward Planning
    6.3 Regression Planning
    6.4 Planning as a CSP
    6.4.1 Action Features
    6.5 Partial-Order Planning
    6.6 Review
    6.7 References and Further Reading
    6.8 Exercises
    Chapter 7: Supervised Machine Learning
    7.1 Learning Issues
    7.2 Supervised Learning
    7.2.1 Evaluating Predictions
    7.2.2 Types of Errors
    7.3 Basic Models for Supervised Learning
    7.3.1 Learning Decision Trees
    7.3.2 Linear Regression and Classification
    7.4 Overfitting
    7.4.2 Regularization
    7.4.3 Cross Validation
    7.5 Neural Networks and Deep Learning
    7.6 Composite Models
    7.6.1 Random Forests
    7.6.2 Ensemble Learning
    7.7 Case-Based Reasoning
    7.8 Learning as Refining the Hypothesis Space
    7.8.1 Version-Space Learning
    7.8.2 Probably Approximately Correct Learning
    7.9 Review
    7.10 References and Further Reading
    7.11 Exercises
    Part III Reasoning, Learning and Acting with Uncertainty
    Chapter 8: Reasoning with Uncertainty
    8.1 Probability
    8.1.1 Semantics of Probability
    8.1.2 Axioms for Probability
    8.1.3 Conditional Probability
    8.1.4 Expected Values
    8.2 Independence
    8.3 Belief Networks
    8.3.1 Observations and Queries
    8.3.2 Constructing Belief Networks
    8.4 Probabilistic Inference
    8.4.1 Variable Elimination for Belief Networks
    8.4.2 Representing Conditional Probabilities and Factors
    8.5 Sequential Probability Models
    8.5.1 Markov Chains
    8.5.2 Hidden Markov Models
    8.5.3 Algorithms for Monitoring and Smoothing
    8.5.4 Dynamic Belief Networks
    8.5.5 Time Granularity
    8.6 Stochastic Simulation
    8.6.1 Sampling from a Single Variable
    8.6.2 Forward Sampling in Belief Networks
    8.6.3 Rejection Sampling
    8.6.4 Likelihood Weighting
    8.6.5 Importance Sampling
    8.6.6 Particle Filtering
    8.6.7 Markov Chain Monte Carlo
    8.7 Review
    8.8 References and Further Reading
    8.9 Exercises
    Chapter 9: Planning with Uncertainty
    9.1 Preferences and Utility
    9.1.1 Axioms for Rationality
    9.1.2 Factored Utility
    9.1.3 Prospect Theory
    9.2 One-Off Decisions
    9.2.1 Single-Stage Decision Networks
    9.3 Sequential Decisions
    9.3.1 Decision Networks
    9.3.2 Policies
    9.3.3 Variable Elimination for Decision Networks
    9.4 The Value of Information and Control
    9.5 Decision Processes
    9.5.1 Policies
    9.5.2 Value Iteration
    9.5.3 Policy Iteration
    9.5.4 Dynamic Decision Networks
    9.5.5 Partially Observable Decision Processes
    9.6 Review
    9.7 References and Further Reading
    9.8 Exercises
    Chapter 10: Learning with Uncertainty
    10.1 Probabilistic Learning
    10.1.1 Learning Probabilities
    10.1.2 Probabilistic Classifiers
    10.1.3 MAP Learning of Decision Trees
    10.1.4 Description Length
    10.2 Unsupervised Learning
    10.2.1 k-Means
    10.2.2 Expectation Maximization for Soft Clustering
    10.3 Learning Belief Networks
    10.3.1 Learning the Probabilities
    10.3.2 Hidden Variables
    10.3.3 Missing Data
    10.3.4 Structure Learning
    10.3.5 General Case of Belief Network Learning
    10.4 Bayesian Learning
    10.5 Review
    10.6 References and Further Reading
    10.7 Exercises
    Chapter 11: Multiagent Systems
    11.1 Multiagent Framework
    11.2 Representations of Games
    11.2.1 Normal Form Games
    11.2.2 Extensive Form of a Game
    11.2.3 Multiagent Decision Networks
    11.3 Computing Strategies with Perfect Information
    11.4 Reasoning with Imperfect Information
    11.4.1 Computing Nash Equilibria
    11.5 Group Decision Making
    11.6 Mechanism Design
    11.7 Review
    11.8 References and Further Reading
    11.9 Exercises
    Chapter 12: Learning to Act
    12.1 Reinforcement Learning Problem
    12.2 Evolutionary Algorithms
    12.3 Temporal Differences
    12.4 Q-learning
    12.5 Exploration and Exploitation
    12.6 Evaluating Reinforcement Learning Algorithms
    12.7 On-Policy Learning
    12.8 Model-Based Reinforcement Learning
    12.9 Reinforcement Learning with Features
    12.9.1 SARSA with Linear Function Approximation
    12.10 Multiagent Reinforcement Learning
    12.10.1 Perfect-Information Games
    12.10.2 Learning to Coordinate
    12.11 Review
    12.12 References and Further Reading
    12.13 Exercises
    Part IV Reasoning, Learning and Acting with Individuals and Relations
    Chapter 13: Individuals and Relations
    13.1 Exploiting Relational Structure
    13.2 Symbols and Semantics
    13.3 Datalog: A Relational Rule Language
    13.3.2 Interpreting Variables
    13.3.3 Queries with Variables
    13.4 Proofs and Substitutions
    13.4.1 Instances and Substitutions
    13.4.2 Bottom-up Procedure with Variables
    13.4.3 Unification
    13.4.4 Definite Resolution with Variables
    13.5 Function Symbols
    13.5.1 Proof Procedures with Function Symbols
    13.6 Applications in Natural Language
    13.6.1 Using Definite Clauses for Context-Free Grammars
    13.6.5 Enforcing Constraints
    13.6.6 Building a Natural Language Interface to a Database
    13.6.7 Limitations
    13.7 Equality
    13.7.1 Allowing Equality Assertions
    13.7.2 Unique Names Assumption
    13.8 Complete Knowledge Assumption
    13.8.1 Complete Knowledge Assumption Proof Procedures
    13.9 Review
    13.10 References and Further Reading
    13.11 Exercises
    Chapter 14: Ontologies and Knowledge-Based Systems
    14.1 Knowledge Sharing
    14.2 Flexible Representations
    14.2.1 Choosing Individuals and Relations
    14.2.2 Graphical Representations
    14.2.3 Classes
    14.3 Ontologies and Knowledge Sharing
    14.3.1 Uniform Resource Identifiers
    14.3.2 Description Logic
    14.3.3 Top-Level Ontologies
    14.4 Implementing Knowledge-Based Systems
    14.4.1 Base Languages and Metalanguages
    14.4.2 A Vanilla Meta-Interpreter
    14.4.3 Expanding the Base Language
    14.4.4 Depth-Bounded Search
    14.4.5 Meta-Interpreter to Build Proof Trees
    14.4.6 Delaying Goals
    14.5 Review
    14.6 References and Further Reading
    14.7 Exercises
    Chapter 15: Relational Planning, Learning, and Probabilistic Reasoning
    15.1 Planning with Individuals and Relations
    15.1.1 Situation Calculus
    15.1.2 Event Calculus
    15.2 Relational Learning
    15.2.1 Structure Learning: Inductive Logic Programming
    15.2.2 Learning Hidden Properties: Collaborative Filtering
    15.3 Statistical Relational Artificial Intelligence
    15.3.1 Relational Probabilistic Models
    15.4 Review
    15.5 References and Further Reading
    15.6 Exercises
    Part V Retrospect and Prospect
    Chapter 16: Retrospect and Prospect
    16.1 Dimensions of Complexity Revisited
    16.2 Social and Ethical Consequences
    16.3 References and Further Reading
    16.4 Exercises
    Appendix A: Mathematical Preliminaries and Notation
    A.1 Discrete Mathematics
    A.2 Functions, Factors and Arrays
    A.3 Relations and the Relational Algebra
    References
    Index

