XSS filter evasion using reinforcement learning to assist cross-site scripting testing

  • Mondal B
  • Banerjee A
  • Gupta S

Abstract

Machine learning and deep learning are widely used and highly effective in attack classification, yet little research has addressed the detection of, and defense against, cross-site scripting (XSS), leaving such systems vulnerable to adversarial attacks. It is therefore important to develop mechanisms that increase a detector's resilience to attack. This study applies reinforcement learning to strengthen XSS detection and to counter adversarial attacks. First, a reinforcement learning framework extracts information from the detection model and mines adversarial inputs that evade it. Second, the detection model is trained adversarially in tandem: in every cycle, the classifier is retrained on the newly discovered malicious samples. In the experimental phase, the proposed model effectively mines malicious inputs missed by both black-box and white-box detection systems. Training the attack and detection models against each other improves the detector's capacity to defend itself, yielding a lower escape (evasion) rate.
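The mining step described above can be sketched as a small reinforcement-learning loop. The sketch below is illustrative only: it uses a toy signature matcher as a stand-in for the paper's learned XSS classifier, a tiny hand-picked mutation action space, and an epsilon-greedy bandit (the paper does not specify its agent in this abstract), with reward 1 whenever a mutated payload slips past the detector.

```python
import random

# Toy signature-based detector standing in for the target XSS filter.
# (Hypothetical stand-in: the paper attacks learned classifiers,
# not a fixed signature list.)
def detector(payload: str) -> bool:
    """Return True if the payload is flagged (blocked)."""
    signatures = ("<script>", "onerror=", "javascript:")
    return any(sig in payload for sig in signatures)

# Action space: a few illustrative filter-evasion mutations.
MUTATIONS = [
    ("case_swap", lambda p: p.replace("script", "sCrIpT")),
    ("tab_break", lambda p: p.replace("javascript:", "java\tscript:")),
    ("noop",      lambda p: p),
]

def mine_adversarial(payload, episodes=200, eps=0.1, seed=0):
    """Epsilon-greedy bandit: learn which mutation evades the detector.

    Reward is 1 when the mutated payload slips past the detector,
    0 otherwise; Q-values are incremental per-action reward averages.
    Returns the learned Q-values and the first successful evasion found.
    """
    rng = random.Random(seed)
    q = [0.0] * len(MUTATIONS)   # estimated reward per action
    n = [0] * len(MUTATIONS)     # visit counts per action
    best = None
    for _ in range(episodes):
        # Explore with probability eps, otherwise pick the greedy action.
        a = (rng.randrange(len(MUTATIONS)) if rng.random() < eps
             else max(range(len(MUTATIONS)), key=q.__getitem__))
        mutated = MUTATIONS[a][1](payload)
        reward = 0.0 if detector(mutated) else 1.0
        n[a] += 1
        q[a] += (reward - q[a]) / n[a]  # incremental mean update
        if reward and best is None:
            best = (MUTATIONS[a][0], mutated)
    return q, best

q_values, evasion = mine_adversarial("<script>alert(1)</script>")
print(evasion)
```

In the full adversarial-training loop the abstract describes, the payloads mined this way would be added to the classifier's training set each cycle and the detector retrained, progressively closing the evasion channels the agent discovers.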

Citation (APA)

Mondal, B., Banerjee, A., & Gupta, S. (2022). XSS filter evasion using reinforcement learning to assist cross-site scripting testing. International Journal of Health Sciences, 6(S2), 11779–11793. https://doi.org/10.53730/ijhs.v6ns2.8167
