TEST - BURRF Catalogue

Adaptive Representations for Reinforcement Learning / by Shimon Whiteson.

By: Whiteson, Shimon.
Contributor(s):
Material type: Text
Series: Studies in Computational Intelligence ; 291
Publisher: Berlin, Heidelberg : Springer Berlin Heidelberg, 2010
Description: 133 pages, 11 illustrations in color; online resource
Content type:
  • text
Media type:
  • computer
Carrier type:
  • online resource
ISBN:
  • 9783642139321
Additional physical formats: Print edition: No title
LoC classification:
  • Q342
Online resources:
Contents:
Part 1 Introduction -- Part 2 Reinforcement Learning -- Part 3 On-Line Evolutionary Computation -- Part 4 Evolutionary Function Approximation -- Part 5 Sample-Efficient Evolutionary Function Approximation -- Part 6 Automatic Feature Selection for Reinforcement Learning -- Part 7 Adaptive Tile Coding -- Part 8 Related Work -- Part 9 Conclusion -- Part 10 Statistical Significance.
Summary: This book presents new algorithms for reinforcement learning, a form of machine learning in which an autonomous agent seeks a control policy for a sequential decision task. Since current methods typically rely on manually designed solution representations, agents that automatically adapt their own representations have the potential to dramatically improve performance. This book introduces two novel approaches for automatically discovering high-performing representations. The first approach synthesizes temporal difference methods, the traditional approach to reinforcement learning, with evolutionary methods, which can learn representations for a broad class of optimization problems. This synthesis is accomplished by customizing evolutionary methods to the on-line nature of reinforcement learning and using them to evolve representations for value function approximators. The second approach automatically learns representations based on piecewise-constant approximations of value functions. It begins with coarse representations and gradually refines them during learning, analyzing the current policy and value function to deduce the best refinements. This book also introduces a novel method for devising input representations. This method addresses the feature selection problem by extending an algorithm that evolves the topology and weights of neural networks such that it evolves their inputs too. In addition to introducing these new methods, this book presents extensive empirical results in multiple domains demonstrating that these techniques can substantially improve performance over methods with manual representations.
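The second approach summarized above (piecewise-constant value functions refined from coarse to fine) can be illustrated with a minimal sketch. This is not the book's algorithm; it is an illustrative toy with hypothetical names (`AdaptiveTiling`, `refine`), assuming a 1-D state space over [0, 1) and using accumulated absolute TD error as the refinement criterion.

```python
# Toy piecewise-constant value function over [0, 1) that starts coarse
# and splits the cell with the largest accumulated absolute TD error.
class AdaptiveTiling:
    def __init__(self, n_initial=2):
        # Cell boundaries, one constant value per cell, and per-cell
        # accumulated error used to decide where to refine.
        self.bounds = [i / n_initial for i in range(n_initial + 1)]
        self.values = [0.0] * n_initial
        self.errors = [0.0] * n_initial

    def _cell(self, s):
        # Linear scan for the cell containing state s.
        for i in range(len(self.values)):
            if self.bounds[i] <= s < self.bounds[i + 1]:
                return i
        return len(self.values) - 1

    def value(self, s):
        return self.values[self._cell(s)]

    def update(self, s, target, lr=0.1):
        # TD-style update: move the cell's constant toward the target
        # and record the absolute error as a refinement signal.
        i = self._cell(s)
        err = target - self.values[i]
        self.values[i] += lr * err
        self.errors[i] += abs(err)

    def refine(self):
        # Split the worst cell in half; children inherit the parent's
        # value, so refinement never discards what was already learned.
        i = max(range(len(self.values)), key=lambda j: self.errors[j])
        mid = (self.bounds[i] + self.bounds[i + 1]) / 2
        self.bounds.insert(i + 1, mid)
        self.values.insert(i, self.values[i])
        self.errors[i] = 0.0
        self.errors.insert(i, 0.0)

tiling = AdaptiveTiling()
for s, target in [(0.1, 1.0), (0.9, -1.0), (0.15, 1.2)]:
    tiling.update(s, target)
tiling.refine()
print(len(tiling.values))  # → 3
```

The book's method analyzes the current policy and value function to choose refinements; this toy uses only error magnitude, which conveys the coarse-to-fine idea in a few lines.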
No items correspond to this record.

Springer eBooks


Access from outside UANL requires a remote-access key.

Universidad Autónoma de Nuevo León
Secretaría de Extensión y Cultura - Dirección de Bibliotecas
Powered by Koha