Cross-Disciplinary Aspects of Exploration in Robotics, Reinforcement Learning, and Search
Exploration is a fundamental capability for intelligent agents, enabling them to operate effectively in the face of uncertainty, gather information, and adapt to new environments. Although exploration has been studied extensively in fields as diverse as Reinforcement Learning (RL), Search Algorithms (SA), and Robotics (ROB), the research communities around these areas remain largely disconnected. This workshop aims to bridge these communities by developing a unified perspective on exploration, examining both the shared principles and the domain-specific challenges. Key topics include the exploration–exploitation dilemma, safe and efficient exploration in real and simulated environments, open-world and continual exploration, tactile and multimodal exploration, and the role of large models, artificial curiosity, and meta-learning in guiding exploratory behavior. By bringing together researchers from machine learning, robotics, planning, and cognitive sciences, this workshop seeks to catalyze new cross-disciplinary approaches to exploration and lay the foundation for more adaptive, resilient, and autonomous agents.
Main objective: The main objective of this workshop is to bring together researchers from the RL, SA, and ROB communities to discuss exploration as a unifying challenge across sequential decision-making, planning, and real-world robotics. While exploration has been central to advances in each of these fields, there has been limited interaction between them: RL research has focused on balancing exploration and exploitation while learning policies; SA has developed systematic and heuristic-driven search strategies; and ROB has addressed the physical and safety constraints of navigating unknown environments. By fostering dialogue across these communities, this workshop aims to uncover synergies, compare methodologies, and identify opportunities for hybrid approaches.
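To make the exploration–exploitation trade-off concrete, the minimal sketch below contrasts two classic bandit strategies: ε-greedy (uniform random exploration with fixed probability) and UCB1 (optimism-driven exploration via confidence bonuses). The five-armed Bernoulli bandit, the horizon, and all parameter values are illustrative assumptions for this sketch, not methods prescribed by the workshop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 5-armed Bernoulli bandit; the true arm means are illustrative.
true_means = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
n_arms, horizon = len(true_means), 2000

def pull(arm):
    """Sample a Bernoulli reward from the chosen arm."""
    return float(rng.random() < true_means[arm])

def run_epsilon_greedy(eps=0.1):
    """Explore uniformly with probability eps, otherwise exploit the empirical best arm."""
    counts, values = np.zeros(n_arms), np.zeros(n_arms)
    total = 0.0
    for _ in range(horizon):
        if rng.random() < eps:
            arm = int(rng.integers(n_arms))      # explore: random arm
        else:
            arm = int(np.argmax(values))         # exploit: empirical best arm
        r = pull(arm)
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]  # incremental mean update
        total += r
    return total

def run_ucb1(c=1.0):
    """Pick the arm maximizing empirical mean plus a confidence bonus (UCB1)."""
    counts, values = np.zeros(n_arms), np.zeros(n_arms)
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1                          # initialize: pull each arm once
        else:
            bonus = c * np.sqrt(np.log(t) / counts)
            arm = int(np.argmax(values + bonus))
        r = pull(arm)
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]
        total += r
    return total

print("epsilon-greedy return:", run_epsilon_greedy())
print("UCB1 return:", run_ucb1())
```

The same tension reappears as heuristic guidance in search algorithms and as safety-constrained information gathering in robotics, which is precisely the common ground this workshop targets.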
Discussion Questions
- How can we formalize a unifying framework for exploration that spans reinforcement learning, search, and autonomous robotics?
- What are effective strategies to balance exploration and exploitation across domains, and how do constraints (safety, resources, horizon length) shape them?
- How can LLMs, attention mechanisms, or artificial curiosity guide efficient exploration in open-ended environments?
- What role do abstractions, hierarchies, and diversity measures play in enabling scalable exploration?
- How can we ensure safe, reset-free, and physically grounded exploration strategies that bridge simulation and reality?
- What benchmarks and metrics are needed to evaluate exploration across different communities?
Workshop Topics
- Exploration vs. exploitation dilemma
- Exploration in reinforcement learning, search algorithms, and autonomous robotics
- Safe exploration and exploration for Sim2Real
- Large language model–guided exploration
- Artificial curiosity and intrinsic motivation
- Exploration in hierarchical and abstract worlds
- Active learning and proactive querying
- Evolutionary algorithms for exploration
- Reward shaping, hacking, and diversity measures
- Attention and pruning mechanisms
- Exploration in human–robot interaction
- Tactile exploration and manipulation in clutter
- Exploration in continual and reset-free learning
- Horizon-aware exploration and planning for exploration
- Meta-learning and learning-to-explore
- Open-ended, open-world exploration
- Exploration for visual question answering