2025

Optimal Boost Design for Auto-bidding Mechanism with Publisher Quality Constraints

Huanyu Yan, Yu Huo, Min Lu, Weitong Ou, Xingyan Shi, Ruihe Shi, Xiaoying Tang# (# corresponding author)

Submitted to Association for the Advancement of Artificial Intelligence (AAAI)

Online bidding is crucial in mobile ecosystems, enabling real-time ad allocation across billions of devices to optimize performance and user experience. Improving ad allocation efficiency is a long-standing research problem, as it directly enhances the economic outcomes for all participants in advertising platforms. This paper investigates the design of optimal boost factors in online bidding while incorporating quality value (the impact of displayed ads on publishers' long-term benefits). To address the divergent interests in quality, we establish a three-party auction framework with a unified welfare metric for advertisers and publishers. Within this framework, we derive the theoretical efficiency lower bound for C-competitive boost in second-price single-slot auctions, then design a novel quality-involved Boosting (q-Boost) algorithm for computing the optimal boost factor. Experimental validation on Alibaba's public dataset (AuctionNet) demonstrates 2%-6% welfare improvements over conventional approaches, confirming the method's effectiveness in real-world settings.
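To make the boost idea concrete, here is a minimal sketch of a boosted second-price single-slot auction. The function name, the linear score form (bid plus boost times quality), and the payment rule below are illustrative assumptions for exposition, not the paper's exact q-Boost formulation.

```python
def run_boosted_auction(bids, qualities, boost):
    """Rank advertisers by the boosted score b_i + boost * q_i; the winner
    pays the lowest bid that would still have won (second-price logic)."""
    scores = [b + boost * q for b, q in zip(bids, qualities)]
    winner = max(range(len(bids)), key=lambda i: scores[i])
    runner_up_score = max(s for i, s in enumerate(scores) if i != winner)
    # Payment: the bid that makes the winner's score tie the runner-up's.
    payment = runner_up_score - boost * qualities[winner]
    return winner, payment

# Two advertisers: ad 0 bids higher, ad 1 has higher quality value.
bids = [3.0, 2.5]
qualities = [0.2, 1.0]
print(run_boosted_auction(bids, qualities, boost=0.0))  # (0, 2.5): pure second-price
print(run_boosted_auction(bids, qualities, boost=1.0))  # (1, 2.2): quality flips the winner
```

The toy example shows why the boost factor matters: with no boost the higher bidder wins, while a sufficiently large boost lets the higher-quality ad win at a discounted payment, trading short-term revenue for publisher quality.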

FedGF: Layer-Wise Federated Learning with Group Fairness Guarantees

Yu Huo*, Yating Li*, Xiaoying Tang# (* equal contribution, # corresponding author)

International Conference on Intelligent Computing (ICIC) 2025 Oral

Federated Learning (FL) enables collaborative training without sharing raw data but often suffers from fairness issues under non-IID distributions. Prior work targets client-level fairness yet overlooks demographic-group biases. We propose FedGF, a layer-wise method that embeds demographic-parity constraints into each layer's descent direction, jointly optimizing accuracy, client fairness, and group fairness. Extensive experiments on benchmark datasets demonstrate that FedGF reduces group accuracy gaps by 78% compared to state-of-the-art methods while maintaining comparable model performance. Our method establishes new benchmarks for both client fairness (0.0862 fairness indicator on FMNIST) and group fairness (0.0002 demographic parity difference on CIFAR-10), highlighting its effectiveness in creating more equitable federated learning systems.
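For readers unfamiliar with the metric quoted above, here is a minimal sketch of the demographic parity difference (DPD): the gap in positive-prediction rates between demographic groups. The function name and interface are illustrative, not FedGF's actual implementation.

```python
def demographic_parity_difference(preds, groups):
    """Max gap in positive-prediction rate P(yhat = 1 | group) across groups."""
    rates = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        rates[g] = sum(preds[i] for i in idx) / len(idx)
    vals = list(rates.values())
    return max(vals) - min(vals)

preds = [1, 1, 1, 0, 0, 1]            # binary predictions
groups = ["a", "a", "a", "b", "b", "b"]  # group membership per sample
print(demographic_parity_difference(preds, groups))  # 1.0 - 1/3 = 2/3
```

A DPD near 0, such as the 0.0002 reported on CIFAR-10, means the model's positive-prediction rate is nearly identical across demographic groups.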
