Machine learning and deep learning are widely used in attack classifiers and are highly effective, yet little research has addressed detecting and defending against cross-site scripting (XSS), leaving such AI-based systems susceptible to adversarial attacks. It is therefore crucial to develop a mechanism that increases a detection algorithm's resilience to attack. This study applies reinforcement learning to enhance XSS detection and to combat adversarial attacks. First, a reinforcement learning framework extracts information from the detection model and then mines adversarial inputs that evade it. Second, the detection model is trained alongside the attack model using an adversarial strategy: in every cycle, the classifier is retrained on the newly discovered malicious samples. In the experiments, the proposed XSS model effectively mines malicious inputs missed by both black-box and white-box detection systems. Training the attack and detection models against each other improves the detector's defensive capacity, leading to a lower escape rate.
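The abstract describes an iterative attack-and-defend loop: an agent mutates XSS payloads until they evade the detector, and the escaped payloads are folded back into the detector's training set. The sketch below illustrates one way such a loop could look. It is a minimal assumption-laden sketch, not the authors' implementation: a character-n-gram classifier stands in for the detection model, a simple epsilon-greedy bandit stands in for the paper's reinforcement learning agent, and every payload, mutation action, and hyperparameter is an illustrative placeholder.

```python
# Hypothetical sketch of the mine-and-retrain loop described in the abstract.
# Detector, agent, payloads, and mutations are all illustrative assumptions.
import random
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

BENIGN = ["hello world", "click here for offers", "user profile page"]
MALICIOUS = ['<script>alert(1)</script>', '<img src=x onerror=alert(1)>']

# Semantics-preserving mutation actions (a tiny subset of what a real
# evasion agent would use).
ACTIONS = [
    lambda p: p.replace("script", "scRipT"),      # tag case toggling
    lambda p: p.replace("alert(1)", "alert`1`"),  # template-literal call
    lambda p: p.replace("<", "\t<"),              # whitespace padding
    lambda p: p.replace("onerror", "OnErRor"),    # attribute case toggling
]

def train_detector(benign, malicious):
    """Fit a char-n-gram classifier: label 1 = XSS, 0 = benign."""
    clf = make_pipeline(
        TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
        LogisticRegression(max_iter=1000),
    )
    clf.fit(benign + malicious, [0] * len(benign) + [1] * len(malicious))
    return clf

q = np.zeros(len(ACTIONS))       # bandit-style action-value estimates
counts = np.zeros(len(ACTIONS))  # times each action was taken
eps = 0.2                        # exploration rate

def mutate(payload, steps=3):
    """Epsilon-greedy rollout: apply a short sequence of mutations."""
    chosen = []
    for _ in range(steps):
        a = random.randrange(len(ACTIONS)) if random.random() < eps else int(q.argmax())
        payload = ACTIONS[a](payload)
        chosen.append(a)
    return payload, chosen

detector = train_detector(BENIGN, MALICIOUS)
train_mal = list(MALICIOUS)

for cycle in range(5):
    escaped = []
    for seed in MALICIOUS:
        for _ in range(20):                      # attack episodes per seed
            adv, chosen = mutate(seed)
            evaded = detector.predict([adv])[0] == 0
            reward = 1.0 if evaded else 0.0
            for a in chosen:                     # incremental-mean Q update
                counts[a] += 1
                q[a] += (reward - q[a]) / counts[a]
            if evaded:
                escaped.append(adv)
    print(f"cycle {cycle}: {len(escaped)} escapes")
    if not escaped:
        break
    train_mal.extend(escaped)                    # adversarial retraining step
    detector = train_detector(BENIGN, train_mal)
```

A full implementation in the spirit of the paper would replace the bandit with a stateful reinforcement learning policy over payload features, and would verify that each mutated payload still executes in a browser before rewarding the agent; the feedback structure, however, is the same as in this sketch.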
Citation:
Mondal, B., Banerjee, A., & Gupta, S. (2022). XSS filter evasion using reinforcement learning to assist cross-site scripting testing. International Journal of Health Sciences, 6(S2), 11779–11793. https://doi.org/10.53730/ijhs.v6ns2.8167