RLNVR: Reinforcement Learning from Non-Verified Real-World Rewards
This paper introduces RLNVR (Reinforcement Learning from Non-Verified Rewards), a framework for training language models using noisy, real-world feedback signals without requiring explicit human verification. Traditional RLHF requires expensive, verified reward signals that are impractical in many real-world domains. RLNVR addresses this challenge through baseline normalization and semantic similarity-based reward transfer.
We demonstrate RLNVR through Walter, a prototype system that optimizes social media content generation using actual engagement data from Bluesky. Preliminary experiments show improvements in content quality and training stability; a comprehensive evaluation is planned for future work.
Reinforcement Learning from Human Feedback (RLHF) has become the standard approach for aligning language models with human preferences. However, RLHF requires high-quality, explicitly verified reward signals that are expensive to obtain at scale. Many real-world applications lack access to such verification, offering only sparse, noisy, and unverified reward signals that traditional RLHF cannot effectively use. Reinforcement Learning from Non-Verified Rewards (RLNVR) is a framework for training language models on these noisy, real-world feedback signals without requiring explicit human verification. Our approach addresses the fundamental challenge of learning from unverified rewards through several key innovations:
1. Baseline Normalization: Accounts for user variability by normalizing rewards relative to user-specific baselines (see the sketch after this list)
2. Semantic Similarity Transfer: Enables learning across related scenarios using semantic embeddings
3. Modular Framework Design: Demonstrates generalizability across different RL algorithms
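The first two innovations can be made concrete with a short sketch. The code below is an illustrative reading of these ideas rather than the Walter implementation: the `SentenceTransformer` encoder, the `all-MiniLM-L6-v2` model name, the softmax temperature, and the class and function names are assumptions chosen for exposition.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed encoder; any text embedder works


def normalize_reward(raw_engagement: float, baseline_mean: float,
                     baseline_std: float, eps: float = 1e-6) -> float:
    """Baseline normalization: score a post relative to the author's typical
    engagement, so highly popular accounts do not dominate the reward signal."""
    return (raw_engagement - baseline_mean) / (baseline_std + eps)


class SemanticRewardTransfer:
    """Semantic similarity transfer (sketch): assign a proxy reward to a new draft
    by similarity-weighting the normalized rewards of previously observed posts."""

    def __init__(self, history_texts, history_rewards, model_name="all-MiniLM-L6-v2"):
        self.encoder = SentenceTransformer(model_name)
        self.history_emb = self._embed(history_texts)        # (N, d), unit-normalized
        self.history_rewards = np.asarray(history_rewards)   # (N,) baseline-normalized rewards

    def _embed(self, texts):
        emb = self.encoder.encode(texts, convert_to_numpy=True)
        return emb / np.linalg.norm(emb, axis=-1, keepdims=True)

    def proxy_reward(self, draft: str, temperature: float = 0.1) -> float:
        """Softmax-weighted average of historical rewards, weighted by cosine similarity."""
        q = self._embed([draft])[0]          # (d,)
        sims = self.history_emb @ q          # cosine similarities to past posts
        weights = np.exp(sims / temperature)
        weights /= weights.sum()
        return float(weights @ self.history_rewards)
```

In practice the proxy reward would be combined with whatever engagement the draft itself eventually receives; how those two signals are weighted is an implementation detail this section does not specify.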
We demonstrate RLNVR’s effectiveness through a real-world application: training language models to generate engaging social media content using actual engagement metrics from Bluesky. Our implementation combines Group Sequence Policy Optimization (GSPO) with Unsupervised Environment Design (UED) to create a robust training system that handles noisy reward signals while maintaining training stability.
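To make the interaction between group-based optimization and noisy rewards more tangible, the following sketch shows two self-contained pieces such a system plausibly contains: a GSPO-style group-relative advantage over candidate posts for the same prompt, and a simple curriculum weight in the spirit of UED. The spread-based prompt priority is an assumption for illustration, not the paper's UED criterion, and the names and example values are hypothetical.

```python
import numpy as np


def group_relative_advantages(rewards, eps: float = 1e-6) -> np.ndarray:
    """GSPO-style group baseline: each candidate post for the same prompt is scored
    against the group's mean and std, which further dampens noise in the
    unverified reward signal before the policy update."""
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)


def prompt_priority(reward_groups) -> np.ndarray:
    """UED-flavoured curriculum weight (assumption: reward spread as a proxy for
    learning potential): prompts whose candidate posts receive the most varied
    feedback are sampled more often in the next round."""
    spreads = np.array([np.std(group) for group in reward_groups])
    return spreads / spreads.sum()


# Example: two prompts, each with a group of 4 candidate posts and noisy proxy rewards.
groups = [[0.2, 1.5, -0.3, 0.9], [0.1, 0.0, 0.2, 0.1]]
print(group_relative_advantages(groups[0]))  # per-candidate advantages for prompt 0
print(prompt_priority(groups))               # prompt 0 gets the higher sampling weight
```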