Best AI papers explained
A podcast by Enoch H. Kang
534 Episodes
Inference time alignment in continuous space
Published: 25.5.2025
Efficient Test-Time Scaling via Self-Calibration
Published: 25.5.2025
Conformal Prediction via Bayesian Quadrature
Published: 25.5.2025
Predicting from Strings: Language Model Embeddings for Bayesian Optimization
Published: 25.5.2025
Self-Evolving Curriculum for LLM Reasoning
Published: 25.5.2025
Online Decision-Focused Learning in Dynamic Environments
Published: 25.5.2025
FisherSFT: Data-Efficient Supervised Fine-Tuning of Language Models Using Information Gain
Published: 25.5.2025
Reward Shaping from Confounded Offline Data
Published: 25.5.2025
Trajectory Bellman Residual Minimization: A Simple Value-Based Method for LLM Reasoning
Published: 25.5.2025
Understanding Best-of-N Language Model Alignment
Published: 25.5.2025
Maximizing Acquisition Functions for Bayesian Optimization - and its relation to Gradient Descent
Published: 24.5.2025
Bayesian Prompt Ensembles: Model Uncertainty Estimation for Black-Box Large Language Models
Published: 24.5.2025
Prompting Strategies for Enabling Large Language Models to Infer Causation from Correlation
Published: 24.5.2025
The Parallel Knowledge Gradient Method for Batch Bayesian Optimization
Published: 24.5.2025
FunBO: Discovering Acquisition Functions for Bayesian Optimization with FunSearch
Published: 24.5.2025
Automated Social Science: A Structural Causal Model-Based Approach
Published: 24.5.2025
Causal Interpretation of Transformer Self-Attention
Published: 24.5.2025
A Causal World Model Underlying Next Token Prediction: Exploring GPT in a Controlled Environment
Published: 24.5.2025
Trace is the Next AutoDiff: Generative Optimization with Rich Feedback, Execution Traces, and LLMs
Published: 24.5.2025
Adaptive Inference-Time Compute: LLMs Can Predict if They Can Do Better, Even Mid-Generation
Published: 24.5.2025
Cut through the noise. We curate and break down the most important AI papers so you don’t have to.
