Generative Reinforcement Learning Content

| Category | Representative Methods | Mathematical Essence | Strengths | Weaknesses / Caveats |
|---|---|---|---|---|
| Flow-based (Flow Matching) | Q-Flow, Value Flows, FM-RL | ODE-based deterministic mapping with exact likelihood | Highly interpretable, stable training, continuous values propagate naturally (see the flow matching sketch below) | Weaker at modeling noise, gradient backpropagation is involved |
| Diffusion-based | Diffusion Policy, Diffusion Reward Model, DDPM-RL | Stochastic SDE, denoising likelihood | Robust to noise, can generate multimodal rewards | High training cost, slow inference |
| VAE-based | RewardVAE, ValueVAE | Latent-variable model, amortized inference | Simple structure, fast approximation of the reward distribution | Hard to capture a high-fidelity reward landscape, prone to mode collapse |
| Energy-based Models (EBM) | Value EBM, Reward EBM | Unnormalized density modeling | Expresses complex energy landscapes, fits RL theory naturally | Sampling is hard, training requires MCMC or contrastive losses |
| GAN-based | GAIL, AIRL, RewardGAN | Implicit generative model | Classic adversarial IRL framework | Unstable reward signal, no explicit likelihood |
| Normalizing Flows (NF) | RealNVP, MAF, Glow-RL | Invertible deterministic mapping | Exact log-likelihood, usable as a reward density | Constrained by the Jacobian structure, struggles with complex distributions |
| Score-based models | Score Matching, EDM, Denoising Score RL | Learns the score ∇ₓ log p(x) (negative energy gradient) | Aligns naturally with reward-gradient modeling (see the score matching sketch below) | Training is involved and relies on SDE derivations |
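To make the Flow Matching row concrete, here is a minimal sketch of conditional flow matching applied to a per-state return distribution. It is an illustrative example under assumed shapes, not the training recipe of Q-Flow, Value Flows, or FM-RL; the names `VelocityNet`, `flow_matching_loss`, and `STATE_DIM` are hypothetical.

```python
# Minimal conditional flow matching sketch for a scalar return distribution.
# Assumptions: states of dimension STATE_DIM, one scalar return per state.
import torch
import torch.nn as nn

STATE_DIM = 8  # assumed state dimensionality

class VelocityNet(nn.Module):
    """Predicts the velocity field v_theta(x_t, t | s) for a 1-D return sample."""
    def __init__(self, state_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 2, hidden), nn.SiLU(),  # input: [state, x_t, t]
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 1),                         # output: velocity dx/dt
        )

    def forward(self, state, x_t, t):
        return self.net(torch.cat([state, x_t, t], dim=-1))

def flow_matching_loss(model, state, target_return):
    """Linear (optimal-transport) conditional flow matching objective."""
    x1 = target_return               # data sample (e.g., an observed return)
    x0 = torch.randn_like(x1)        # noise sample
    t = torch.rand(x1.shape[0], 1)   # random time in [0, 1]
    x_t = (1.0 - t) * x0 + t * x1    # point on the straight-line path
    target_v = x1 - x0               # velocity of that path
    pred_v = model(state, x_t, t)
    return ((pred_v - target_v) ** 2).mean()

# Usage: one gradient step on a random batch (illustrative data only).
model = VelocityNet(STATE_DIM)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
states, returns = torch.randn(64, STATE_DIM), torch.randn(64, 1)
loss = flow_matching_loss(model, states, returns)
opt.zero_grad(); loss.backward(); opt.step()
```

Sampling a return then amounts to integrating the learned ODE from noise to data, which is where the exact-likelihood and deterministic-mapping properties in the table come from.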
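Similarly, the Score-based row can be illustrated with denoising score matching on a scalar reward conditioned on a state. This is a sketch under assumed shapes and a log-uniform noise schedule, not the objective of EDM or any specific Denoising Score RL method; `ScoreNet`, `dsm_loss`, and the sigma range are hypothetical.

```python
# Minimal denoising score matching sketch for a conditional reward density.
# The network regresses the score of the Gaussian-perturbed data distribution.
import math
import torch
import torch.nn as nn

STATE_DIM = 8  # assumed state dimensionality

class ScoreNet(nn.Module):
    """Predicts s_theta(x_noisy, sigma | s) ≈ ∇_x log p_sigma(x | s)."""
    def __init__(self, state_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 2, hidden), nn.SiLU(),  # input: [state, x_noisy, sigma]
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, x_noisy, sigma):
        return self.net(torch.cat([state, x_noisy, sigma], dim=-1))

def dsm_loss(model, state, reward, sigma_min=0.01, sigma_max=1.0):
    """Denoising score matching across log-uniformly sampled noise levels."""
    log_sigma = (torch.rand(reward.shape[0], 1)
                 * (math.log(sigma_max) - math.log(sigma_min)) + math.log(sigma_min))
    sigma = torch.exp(log_sigma)
    noise = torch.randn_like(reward)
    x_noisy = reward + sigma * noise
    target_score = -noise / sigma            # score of N(x_noisy; reward, sigma^2)
    pred_score = model(state, x_noisy, sigma)
    # Weight by sigma^2 so all noise levels contribute comparably.
    return ((sigma ** 2) * (pred_score - target_score) ** 2).mean()
```

Once trained, the score network can drive annealed Langevin dynamics or a reverse-time SDE to draw reward samples, which is the "requires SDE derivations" caveat noted in the table.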