Policy decomposition for evaluation performance improvement of PDP


Abstract

In conventional centralized authorization models, the evaluation performance of the policy decision point (PDP) degrades noticeably as the number of rules in a policy grows. To improve PDP evaluation performance, a distributed policy evaluation engine called XDPEE is presented. In this engine, the single PDP of the centralized authorization model is replaced by multiple PDPs. A policy is decomposed into multiple subpolicies, each with fewer rules, using a decomposition method that balances the cost of the subpolicies deployed to each PDP. Policy decomposition is the key problem in improving PDP evaluation performance. A greedy algorithm with O(n log n) time complexity for policy decomposition is constructed. In experiments, the policies of the LMS, VMS, and ASMS from real applications are each decomposed into multiple subpolicies using the greedy algorithm. Policy decomposition guarantees that the cost of the subpolicies deployed to each PDP is equal or approximately equal. Experimental results show that (1) the policy decomposition method effectively improves PDP evaluation performance and (2) PDP evaluation time decreases as the number of PDPs grows. © 2014 Fan Deng et al.
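The abstract does not reproduce the paper's greedy algorithm itself; the following is a minimal sketch of one common greedy strategy that matches the stated O(n log n) bound and the cost-balancing goal: sort the rules by evaluation cost in descending order and repeatedly assign the next rule to the currently lightest subpolicy, tracked with a min-heap. The function name, the rule_costs mapping, and the per-rule cost values are illustrative assumptions, not the authors' implementation.

```python
import heapq

def decompose_policy(rule_costs, num_pdps):
    """Greedily split a policy's rules into num_pdps subpolicies whose
    total evaluation costs are equal or approximately equal.

    Sorting the rules is O(n log n); each of the n heap operations is
    O(log num_pdps), so the overall time is O(n log n).
    """
    # Min-heap of (current total cost, subpolicy index).
    heap = [(0.0, i) for i in range(num_pdps)]
    heapq.heapify(heap)

    subpolicies = [[] for _ in range(num_pdps)]

    # Assign the most expensive rules first; each goes to the
    # subpolicy that is currently cheapest to keep the loads balanced.
    for rule_id, cost in sorted(rule_costs.items(),
                                key=lambda kv: kv[1], reverse=True):
        total, idx = heapq.heappop(heap)
        subpolicies[idx].append(rule_id)
        heapq.heappush(heap, (total + cost, idx))

    return subpolicies

if __name__ == "__main__":
    # Hypothetical per-rule evaluation costs for a toy policy.
    costs = {"r1": 5, "r2": 3, "r3": 3, "r4": 2, "r5": 2, "r6": 1}
    for i, sub in enumerate(decompose_policy(costs, num_pdps=3)):
        print(f"PDP {i}: {sub} (cost {sum(costs[r] for r in sub)})")
```

Each resulting subpolicy would then be deployed to its own PDP, so incoming access requests can be evaluated against a smaller rule set.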

Citation (APA)

Deng, F., Chen, P., Zhang, L. Y., Wang, X. Q., Li, S. D., & Xu, H. (2014). Policy decomposition for evaluation performance improvement of PDP. Mathematical Problems in Engineering, 2014. https://doi.org/10.1155/2014/610278
