Energy-Efficient Split Learning for Fine-Tuning Large Language Models in Edge Networks

Abstract

In this letter, we propose an energy-efficient split learning (SL) framework for fine-tuning large language models (LLMs) using geo-distributed personal data at the network edge, where LLMs are split and alternately trained across massive mobile devices and an edge server. Considering device heterogeneity and channel dynamics in edge networks, a Cut lAyer and computing Resource Decision (CARD) algorithm is developed to minimize training delay and energy consumption. Simulation results demonstrate that the proposed approach reduces the average training delay and the server's energy consumption by 70.8% and 53.1%, respectively, compared to the benchmarks.
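The letter itself is not reproduced here, so the details of CARD are unavailable; as a rough illustration of the kind of decision it makes, the sketch below scores each candidate cut layer by a weighted sum of end-to-end training delay (device compute, activation upload, server compute) and a simple CMOS-style energy model, then picks the minimizer. Everything in it is a hypothetical stand-in: the names (LayerProfile, card_cut_layer), the cost weights, and the energy coefficients kappa_d and kappa_s are assumptions, and the real algorithm additionally optimizes the allocation of computing resources.

```python
import math
from dataclasses import dataclass

# Hypothetical per-layer profile of the split model.
@dataclass
class LayerProfile:
    flops: float            # forward + backward FLOPs of this layer
    activation_bits: float  # activation size sent uplink if the model is cut here

def card_cut_layer(layers, f_device, f_server, uplink_rate,
                   kappa_d=1e-28, kappa_s=1e-28, w_delay=0.5, w_energy=0.5):
    """Toy stand-in for a cut-layer decision: exhaustively score each
    candidate cut point by weighted delay + energy and return the best."""
    best_cut, best_cost = None, math.inf
    for cut in range(1, len(layers)):
        dev_flops = sum(l.flops for l in layers[:cut])
        srv_flops = sum(l.flops for l in layers[cut:])
        # Delay: device-side compute + activation upload + server-side compute.
        delay = (dev_flops / f_device
                 + layers[cut - 1].activation_bits / uplink_rate
                 + srv_flops / f_server)
        # Energy: E = kappa * f^2 * workload, using FLOPs as a cycle proxy.
        energy = (kappa_d * f_device**2 * dev_flops
                  + kappa_s * f_server**2 * srv_flops)
        cost = w_delay * delay + w_energy * energy
        if cost < best_cost:
            best_cut, best_cost = cut, cost
    return best_cut, best_cost

# Example: 4 identical transformer blocks, heterogeneous device/server speeds.
blocks = [LayerProfile(flops=2e9, activation_bits=8e6) for _ in range(4)]
cut, cost = card_cut_layer(blocks, f_device=1e9, f_server=1e12, uplink_rate=1e7)
print(f"chosen cut layer: {cut}, cost: {cost:.3f}")
```

In this toy setting the optimum shifts toward earlier cuts as the device slows down or the uplink speeds up, which mirrors the trade-off the letter describes between on-device computation and activation transmission.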

Citation (APA)

Li, Z., Wu, S., Li, L., & Zhang, S. (2025). Energy-Efficient Split Learning for Fine-Tuning Large Language Models in Edge Networks. IEEE Networking Letters, 7(3), 176–180. https://doi.org/10.1109/LNET.2025.3530430
