Increasing the number of cores is one of the most effective ways to improve performance. However, extensive experimental studies on mobile edge computing (e.g., Android devices) indicate that the memory management system has gradually become a key performance bottleneck. Prior work on memory management has mainly explored the trade-off between avoiding fragmentation and improving allocation efficiency. Our previous research shows that fragmentation is no longer the crucial bottleneck; instead, inter- and intra-thread behavior should be the focus, which motivated memory management based on thread behaviors (MMBTB). Unfortunately, MMBTB lacks a unified optimization programming interface and a sound architecture. In this paper, we therefore propose a memory resource management framework at the operating system (OS) layer for mobile edge computing, called the thread-oriented memory management layer (TOMML), to address this problem. TOMML follows the microkernel architecture pattern and lets users select plug-ins to pursue different optimization goals. We demonstrate the efficiency of TOMML through theoretical simulation and experiments: TOMML improves memory allocation efficiency by 12%-20%. Furthermore, we introduce a power-saving plug-in that frees 6%-25% more memory banks than previous state-of-the-art work.
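To make the plug-in idea concrete, the sketch below shows, in C, what a plug-in interface for such a thread-oriented layer might look like under a microkernel-style design: the core layer only dispatches to whichever policy plug-in the user selects. All names here (the `mm_plugin_t` hooks, `thread_profile_t`, and the `tomml_alloc`/`tomml_free` wrappers) are illustrative assumptions, not the actual TOMML API.

```c
/* Hypothetical sketch of a plug-in interface for a thread-oriented
 * memory management layer. Names and signatures are illustrative
 * assumptions, not the authors' TOMML API. */
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

/* Per-thread behavior summary that a plug-in may consult. */
typedef struct {
    unsigned long thread_id;
    size_t        avg_alloc_size;   /* observed mean allocation size   */
    unsigned int  alloc_rate;       /* allocations per scheduling tick */
} thread_profile_t;

/* A plug-in supplies policy hooks; the core layer stays minimal
 * (microkernel style) and merely dispatches to the selected plug-in. */
typedef struct {
    const char *name;
    void *(*alloc_hook)(const thread_profile_t *tp, size_t size);
    void  (*free_hook)(void *ptr);
} mm_plugin_t;

/* Example plug-in: favors allocation speed, ignores bank placement.
 * A power-saving plug-in could instead steer allocations toward
 * already-active banks so idle banks stay free for power-down. */
static void *fast_alloc(const thread_profile_t *tp, size_t size) {
    (void)tp;                 /* a real policy would use the profile */
    return malloc(size);
}
static void fast_free(void *ptr) { free(ptr); }

static const mm_plugin_t fast_plugin = {
    .name       = "fast-alloc",
    .alloc_hook = fast_alloc,
    .free_hook  = fast_free,
};

/* The core layer holds only the currently selected plug-in. */
static const mm_plugin_t *active_plugin = &fast_plugin;

void *tomml_alloc(const thread_profile_t *tp, size_t size) {
    return active_plugin->alloc_hook(tp, size);
}
void tomml_free(void *ptr) {
    active_plugin->free_hook(ptr);
}

int main(void) {
    thread_profile_t tp = { .thread_id = 1, .avg_alloc_size = 64, .alloc_rate = 100 };
    void *p = tomml_alloc(&tp, 64);
    printf("allocated via plug-in '%s': %p\n", active_plugin->name, p);
    tomml_free(p);
    return 0;
}
```

Separating the thread-behavior profile from the policy hooks is what allows different optimization goals (allocation speed, bank-level power saving) to be swapped in without touching the core layer.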
Zhu, Z., Wu, F., Cao, J., Li, X., & Jia, G. (2019). A Thread-Oriented Memory Resource Management Framework for Mobile Edge Computing. IEEE Access, 7, 45881–45890. https://doi.org/10.1109/ACCESS.2019.2909642