Sugiyama, M: Statistical Reinforcement Learning

Modern Machine Learning Approaches

Masashi Sugiyama

Book (hardcover, English)

Fr. 164.00

incl. statutory VAT

Ready to ship within 6-9 working days
Free shipping

Other formats

Paperback

Fr. 74.90

Hardcover

Fr. 164.00

eBook (PDF)

Fr. 58.90

Description

Reinforcement learning is a mathematical framework for developing computer agents that can learn optimal behavior by relating generic reward signals to their past actions. With numerous successful applications in business intelligence, plant control, and gaming, the RL framework is ideal for decision making in unknown environments with large amounts of data.
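To make the reward-feedback loop in that description concrete, here is a minimal tabular Q-learning sketch. Q-learning is a classic model-free RL algorithm, shown purely as a generic illustration, not as code from the book; the corridor environment and all constants are invented for this example.

```python
import random

# Toy 1-D corridor: states 0..4, reward only at the right end (state 4).
# All names and constants here are illustrative, not from the book.
N_STATES, ACTIONS = 5, (-1, +1)       # actions: step left / step right
GAMMA, ALPHA, EPSILON = 0.9, 0.1, 0.2  # discount, learning rate, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment: move, clip to the corridor, reward 1 at the goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection
        a = random.choice(ACTIONS) if random.random() < EPSILON \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        # Q-learning update: relate the reward signal back to the action taken
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# Learned greedy policy: every non-terminal state should step right (+1)
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)})
```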

Supplying an up-to-date and accessible introduction to the field, Statistical Reinforcement Learning: Modern Machine Learning Approaches presents fundamental concepts and practical algorithms of statistical reinforcement learning from the modern machine learning viewpoint. It covers various types of RL approaches, including model-based and model-free approaches, policy iteration, and policy search methods.
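The optimization problem behind both families of methods can be stated compactly. In standard RL notation (a generic formulation, not quoted from the book):

```latex
% An agent with policy \pi(a \mid s) seeks to maximize the expected
% discounted return
\[
  J(\pi) \;=\; \mathbb{E}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\right],
  \qquad 0 \le \gamma < 1 .
\]
% Policy iteration attacks J through the value function
% Q^{\pi}(s,a) = \mathbb{E}\bigl[\sum_{t} \gamma^{t} r_t \mid s_0 = s,\, a_0 = a\bigr],
% alternating evaluation of Q^{\pi} with greedy improvement of \pi,
% whereas policy search parameterizes \pi_{\theta} and ascends
% \nabla_{\theta} J(\pi_{\theta}) directly.
```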

  • Covers the range of reinforcement learning algorithms from a modern perspective

  • Lays out the associated optimization problems for each reinforcement learning scenario covered

  • Provides thought-provoking statistical treatment of reinforcement learning algorithms

The book covers approaches recently introduced in the data mining and machine learning fields to provide a systematic bridge between RL and data mining/machine learning researchers. It presents state-of-the-art results, including dimensionality reduction in RL and risk-sensitive RL. Numerous illustrative examples are included to help readers understand the intuition and usefulness of reinforcement learning techniques.
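As one concrete instance of what "risk-sensitive RL" measures, conditional value-at-risk (CVaR, which also appears in the table of contents under robust policy iteration) replaces the expected return with the mean of the worst outcomes. A hypothetical NumPy sketch with made-up per-episode returns, for illustration only:

```python
import numpy as np

def cvar(returns, alpha=0.05):
    """Mean of the worst alpha-fraction of returns (lower tail).

    Generic empirical estimator for illustration; not an
    implementation from the book.
    """
    returns = np.sort(np.asarray(returns))
    k = max(1, int(np.ceil(alpha * len(returns))))
    return returns[:k].mean()

# Two hypothetical policies with equal mean return but different tails:
steady = np.full(1000, 10.0)
risky = np.concatenate([np.full(950, 11.0), np.full(50, -9.0)])
print(steady.mean(), risky.mean())  # 10.0 10.0 -> indistinguishable in mean
print(cvar(steady), cvar(risky))    # 10.0 vs -9.0 -> CVaR penalizes the risk
```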

This book is an ideal resource for graduate-level students in computer science and applied statistics programs, as well as researchers and engineers in related fields.

Product details

Binding hardcover
Number of pages 206
Publication date 5 June 2015
Language English
ISBN 978-1-4398-5689-5
Publisher Taylor & Francis Ltd.
Dimensions (L/W/H) 24.1/15.6/1.7 cm
Weight 430 g
Illustrations 3 tables (black and white); 114 illustrations (black and white)

Customer reviews

No reviews have been written yet.
Table of Contents

  • Introduction to Reinforcement Learning: Reinforcement Learning; Mathematical Formulation; Structure of the Book (Model-Free Policy Iteration, Model-Free Policy Search, Model-Based Reinforcement Learning)

MODEL-FREE POLICY ITERATION

  • Policy Iteration with Value Function Approximation: Value Functions (State Value Functions, State-Action Value Functions); Least-Squares Policy Iteration (Immediate-Reward Regression, Algorithm, Regularization, Model Selection); Remarks
  • Basis Design for Value Function Approximation: Gaussian Kernels on Graphs (MDP-Induced Graph, Ordinary Gaussian Kernels, Geodesic Gaussian Kernels, Extension to Continuous State Spaces); Illustration (Setup, Geodesic Gaussian Kernels, Ordinary Gaussian Kernels, Graph-Laplacian Eigenbases, Diffusion Wavelets); Numerical Examples (Robot-Arm Control, Robot-Agent Navigation); Remarks
  • Sample Reuse in Policy Iteration: Formulation; Off-Policy Value Function Approximation (Episodic Importance Weighting, Per-Decision Importance Weighting, Adaptive Per-Decision Importance Weighting, Illustration); Automatic Selection of Flattening Parameter (Importance-Weighted Cross-Validation, Illustration); Sample-Reuse Policy Iteration (Algorithm, Illustration); Numerical Examples (Inverted Pendulum, Mountain Car); Remarks
  • Active Learning in Policy Iteration: Efficient Exploration with Active Learning (Problem Setup, Decomposition of Generalization Error, Estimation of Generalization Error, Designing Sampling Policies, Illustration); Active Policy Iteration (Sample-Reuse Policy Iteration with Active Learning, Illustration); Numerical Examples; Remarks
  • Robust Policy Iteration: Robustness and Reliability in Policy Iteration (Robustness, Reliability); Least Absolute Policy Iteration (Algorithm, Illustration, Properties); Numerical Examples; Possible Extensions (Huber Loss, Pinball Loss, Deadzone-Linear Loss, Chebyshev Approximation, Conditional Value-At-Risk); Remarks

MODEL-FREE POLICY SEARCH

  • Direct Policy Search by Gradient Ascent: Formulation; Gradient Approach (Gradient Ascent, Baseline Subtraction for Variance Reduction, Variance Analysis of Gradient Estimators); Natural Gradient Approach (Natural Gradient Ascent, Illustration); Application in Computer Graphics: Artist Agent (Sumie Painting, Design of States, Actions, and Immediate Rewards, Experimental Results); Remarks
  • Direct Policy Search by Expectation-Maximization: Expectation-Maximization Approach; Sample Reuse (Episodic Importance Weighting, Per-Decision Importance Weighting, Adaptive Per-Decision Importance Weighting, Automatic Selection of Flattening Parameter, Reward-Weighted Regression with Sample Reuse); Numerical Examples; Remarks
  • Policy-Prior Search: Formulation; Policy Gradients with Parameter-Based Exploration (Policy-Prior Gradient Ascent, Baseline Subtraction for Variance Reduction, Variance Analysis of Gradient Estimators, Numerical Examples); Sample Reuse in Policy-Prior Search (Importance Weighting, Variance Reduction by Baseline Subtraction, Numerical Examples); Remarks

MODEL-BASED REINFORCEMENT LEARNING

  • Transition Model Estimation: Conditional Density Estimation (Regression-Based Approach, ε-Neighbor Kernel Density Estimation, Least-Squares Conditional Density Estimation); Model-Based Reinforcement Learning; Numerical Examples (Continuous Chain Walk, Humanoid Robot Control); Remarks
  • Dimensionality Reduction for Transition Model Estimation: Sufficient Dimensionality Reduction; Squared-Loss Conditional Entropy (Conditional Independence, Dimensionality Reduction with SCE, Relation to Squared-Loss Mutual Information); Numerical Examples (Artificial and Benchmark Datasets, Humanoid Robot); Remarks

References
Index