Algorithm for improving interpretability of support vector models for anomaly detection in network traffic
This paper is devoted to enhancing the development of an algorithm aimed at improving the interpretability of machine learning models used for detecting anomalies in network traffic, which is critical for modern cybersecurity systems. The focus is on one-class support vector machine (SVM) models, which are widely used for their high accuracy in anomaly detection but suffer from a lack of transparency, often being referred to as «black box» models.
| Published in: | Проблеми керування та інформатики |
|---|---|
| Date: | 2025 |
| Main Authors: | Kerimov, K., Kurbanov, S., Azizova, Z. |
| Format: | Article |
| Language: | English |
| Published: | Інститут кібернетики ім. В.М. Глушкова НАН України, 2025 |
| Subjects: | |
| Online Access: | https://nasplib.isofts.kiev.ua/handle/123456789/211402 |
| Journal Title: | Digital Library of Periodicals of National Academy of Sciences of Ukraine |
| Cite this: | Algorithm for improving interpretability of support vector models for anomaly detection in network traffic / K. Kerimov, S. Kurbanov, Z. Azizova // Проблемы управления и информатики. — 2025. — № 3. — С. 66-73. — Бібліогр.: 5 назв. — англ. |
Institution
Digital Library of Periodicals of National Academy of Sciences of Ukraine
| _version_ | 1859514134111977472 |
|---|---|
| author | Kerimov, K. Kurbanov, S. Azizova, Z. |
| author_facet | Kerimov, K. Kurbanov, S. Azizova, Z. |
| citation_txt | Algorithm for improving interpretability of support vector models for anomaly detection in network traffic / K. Kerimov,S. Kurbanov, Z. Azizova // Проблемы управления и информатики. — 2025. — № 3. — С. 66-73. — Бібліогр.: 5 назв. — англ. |
| collection | DSpace DC |
| container_title | Проблеми керування та інформатики |
| description | This paper is devoted to enhancing the development of an algorithm aimed at improving the interpretability of machine learning models used for detecting anomalies in network traffic, which is critical for modern cybersecurity systems. The focus is on one-class support vector machine (SVM) models, which are widely used for their high accuracy in anomaly detection but suffer from a lack of transparency, often being referred to as «black box» models.
Удосконалення розробки алгоритму для покращення інтерпретованості моделей машинного навчання, що використовуються для виявлення аномалій у мережевому трафіку, критично важливе для сучасних систем кібербезпеки. Головна увага приділяється однокласовим моделям опорних векторів (SVM — Support Vector Machine), які широко застосовуються завдяки високій точності виявлення аномалій, але є недостатньо прозорими, тому їх називають моделями «чорної скриньки».
|
| first_indexed | 2026-03-13T03:01:10Z |
| format | Article |
| fulltext |
© K. KERIMOV, S. KURBANOV, Z. AZIZOVA, 2025
66 ISSN 2786-6491
РОБОТИ ТА СИСТЕМИ ШТУЧНОГО ІНТЕЛЕКТУ
UDC 004.056.53
K. Kerimov, S. Kurbanov, Z. Azizova
ALGORITHM FOR IMPROVING INTERPRETABILITY
OF SUPPORT VECTOR MODELS FOR ANOMALY
DETECTION IN NETWORK TRAFFIC
Komil Kerimov
Tashkent University of Information Technologies named after Muhammad al-Khwarizmi,
Uzbekistan,
https://orcid.org/0000-0002-2907-1027
kamil@kerimov.uz
Sardor Kurbanov
Tashkent University of Information Technologies named after Muhammad al-Khwarizmi,
Uzbekistan,
Zarina Azizova
Tashkent University of Information Technologies named after Muhammad al-Khwarizmi,
Uzbekistan,
https://orcid.org/0000-0002-3729-3842
z.i.azizova18@gmail.com
This paper is devoted to enhancing the development of an algorithm aimed at im-
proving the interpretability of machine learning models used for detecting anomalies
in network traffic, which is critical for modern cybersecurity systems. The focus is on
one-class support vector machine (SVM) models, which are widely used for their high
accuracy in anomaly detection but suffer from a lack of transparency, often being re-
ferred to as «black box» models. This opacity limits their practical applicability, espe-
cially in high-stakes environments like cybersecurity, where understanding the rea-
soning behind decisions is crucial. To address this limitation, we present an
interpretable system that integrates two popular model-agnostic explanation tech-
niques: SHAP (SHapley Additive exPlanations) for global interpretability and LIME
(Local Interpretable Model-Agnostic Explanations) for local interpretability. The sys-
tem is designed to not only detect anomalous behavior in network traffic but also to
explain the model’s reasoning in both general and specific contexts. The one-class
SVM is trained on normal network traffic to learn the boundary of normal behavior.
Any traffic falling outside this boundary is classified as anomalous. The SHAP
module provides insights into the overall importance of traffic attributes (e.g., sbytes,
dbytes, dpkts, rate) across the entire dataset, while the LIME module reveals which at-
tributes and their specific values contributed to the classification of particular anoma-
lies. This dual approach allows analysts to understand both the general behavior of the
model and the specific causes of individual detections. The results show a marked im-
provement in model interpretability, helping analysts more effectively identify poten-
tial threats and respond appropriately. Although explanation methods introduce
additional computational overhead and approximate the model's internal logic, the
benefits in transparency and usability outweigh these drawbacks. This research con-
tributes to the broader goal of building trustworthy AI systems and lays the foundation
for future work on specialized interpretability techniques tailored to one-class models.
Keywords: anomaly detection, one-class SVM, interpretability, SHAP, LIME,
cybersecurity, network traffic.
Introduction
In the modern world, where cybersecurity is becoming increasingly important,
timely and effective detection of anomalies in network traffic plays a key role in protecting information systems from various threats. One of the most promising approaches is the use of the SVM algorithm to train anomaly detection models on examples of normal network traffic. Despite its high efficiency, one of the limitations of SVM is the interpretability problem known as the «black box problem».
In this paper, we propose a novel technique aimed at improving the interpretability
and explainability of SVM models for anomaly detection in network traffic. Our ap-
proach is based on the application of state-of-the-art black-box explanation techniques
such as SHAP and LIME, combined with enhancements we have developed to more ef-
ficiently extract and visualize explanations of SVM model solutions.
Improving the interpretability of anomaly detection models is important because it
increases the transparency of decision making, facilitates the analysis of detected
anomalies, and promotes tighter integration of AI systems into cybersecurity processes.
Our new methodology aims to overcome the existing limitations and provide more in-
terpretable and explainable SVM-based solutions.
In this research, we will experimentally evaluate the developed methodology by
applying it to existing SVM models for anomaly detection in network traffic. We will
analyze and visualize the obtained explanations and evaluate the extent to which the in-
terpretability of these models is improved. In addition, we will discuss potential practi-
cal applications of our technique as well as possible directions for further research.
The results of this work contribute to the development of interpretable and trusted
anomaly detection systems based on machine learning techniques. Improved interpreta-
bility will enhance the defense of information systems against various types of cyberat-
tacks and malicious activity by tightly integrating human expertise and advanced artifi-
cial intelligence algorithms.
Main results
The goal is to construct a boundary around normal data that maximizes the distance
to a few potential anomalies. New observations falling inside this boundary are classi-
fied as normal, while observations outside the boundary are considered anomalous.
In the context of anomaly detection in network traffic, the SVM is trained on
examples of normal network traffic to generate a profile of normal behavior. The model
is then used to monitor incoming traffic and identify any deviations from this profile,
which may indicate potential cyberattacks, outages, or other unwanted activity.
The «black box» problem and interpretability limitations of SVM. One of the major
drawbacks of the SVM is its black-box nature. While the model can effectively detect anoma-
lies, it does not provide insight into the underlying causes of those anomalies. A standard im-
plementation of SVM produces a binary classification result (normal/anomalous), but does
not explain which features or pieces of data contributed to this decision.
Lack of interpretability becomes a serious limitation, especially in high-risk areas
such as cybersecurity. Operators and analysts need to understand the causes of anomalies
in order to make informed decisions and take appropriate countermeasures. Simply knowing
that an anomaly exists is often insufficient without additional context and explanation.
It follows that improving the interpretability of SVM models is an important challenge, one that will broaden their applicability in anomaly detection problems.
[Fig. 1 block labels: Dataset UNSW-NB15; Training (normal data); Test data (normal + attacks); One-class SVM model; Performance evaluation; Explanation with SHAP; Explanation with LIME; Interpretable results]
Algorithms for enhancing interpretability
We have developed an interpretable SVM model that not only detects anomalies in
network traffic, but also provides useful explanations by identifying the features or
pieces of data that most strongly influenced the model's decision for each detected
anomaly instance. This allows operators to quickly identify potential threats and take
appropriate action.
SHAP is used to provide generalized explanations at the level of the entire model, and LIME is used to visualize local explanations for specific anomaly instances. Combining these two techniques provides a comprehensive view of the interpretability of the SVM model.
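To make the global (SHAP) side concrete: a Shapley value is a feature's marginal contribution to the model's score, averaged over all orderings in which features can be "switched on". The sketch below is our illustration, not the paper's implementation: the additive anomaly score, baseline profile, and instance values are hypothetical; only the feature names are borrowed from the paper.

```python
# Toy exact-Shapley computation for a hypothetical 3-feature anomaly score.
from itertools import permutations

FEATURES = ["sbytes", "dbytes", "rate"]                    # names from Table 4
BASELINE = {"sbytes": 100.0, "dbytes": 80.0, "rate": 1.0}  # assumed "typical" traffic

def score(x):
    """Hypothetical additive anomaly score: grows with byte counts and rate."""
    return 0.002 * x["sbytes"] + 0.001 * x["dbytes"] + 0.5 * x["rate"]

def shapley_values(instance):
    """Exact Shapley values: average marginal contribution over all orderings."""
    orders = list(permutations(FEATURES))
    phi = {f: 0.0 for f in FEATURES}
    for order in orders:
        x = dict(BASELINE)       # start from the baseline profile
        prev = score(x)
        for f in order:          # switch features to the instance's values one by one
            x[f] = instance[f]
            cur = score(x)
            phi[f] += cur - prev
            prev = cur
    return {f: v / len(orders) for f, v in phi.items()}

suspicious = {"sbytes": 5000.0, "dbytes": 90.0, "rate": 40.0}
phi = shapley_values(suspicious)
# Shapley values sum to score(suspicious) - score(BASELINE); for an additive
# score each value equals the feature's own contribution change (sbytes: 9.8).
print(phi)
```

SHAP's KernelExplainer approximates this averaging by sampling, since enumerating all orderings is infeasible for the 49 UNSW-NB15 features.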
Description of the experimental setup and data used. The general scheme of the experimental setup is shown in Fig. 1.
Fig. 1
Statistics of the UNSW-NB15 dataset are presented in Table 1.
Table 1
| Feature | Quantity |
|---|---|
| Total instances | 2 861 327 |
| Normal instances | 2 540 044 |
| Anomalous instances | 321 283 |
| Attack types | 9 |
| Number of features | 49 |
| Categorical features | 4 |
| Numerical features | 45 |
For the experiments, 100 000 instances were selected with the following class distribution (Table 2).

Table 2
| Class | Quantity | Share |
|---|---|---|
| Normal | 87 000 | 87 % |
| Attack | 13 000 | 13 % |
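The paper does not specify how the 100 000-instance subset was drawn; assuming a simple stratified draw that preserves the Table 2 proportions, it could be sketched as follows (record counts come from Table 1; the integer ids are stand-ins for the real UNSW-NB15 records):

```python
# Sketch (our assumption, not the paper's code): stratified subsampling of the
# experimental subset with the 87 % / 13 % split reported in Table 2.
import random

random.seed(42)

def stratified_sample(normal_ids, attack_ids, total=100_000, normal_share=0.87):
    """Sample without replacement, preserving the target class proportions."""
    n_normal = round(total * normal_share)   # 87 000
    n_attack = total - n_normal              # 13 000
    sample = random.sample(normal_ids, n_normal) + random.sample(attack_ids, n_attack)
    random.shuffle(sample)                   # mix classes so order carries no signal
    return sample

# Stand-ins for the UNSW-NB15 record ids (sizes from Table 1).
normal_ids = list(range(2_540_044))
attack_ids = list(range(2_540_044, 2_861_327))
subset = stratified_sample(normal_ids, attack_ids)
print(len(subset))  # 100000
```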
Implementation of SVM models. The parameters w and p are found by solving a quadratic programming problem; the resulting decision function is

f(x) = sign(wᵀx − p).
The key hyperparameters used in our implementation of SVM are presented in Table 3.
Table 3
| Parameter | Value | Description |
|---|---|---|
| kernel | rbf | Type of kernel function |
| nu | 0.05 | Upper bound on the fraction of outliers |
| gamma | 0.1 | Kernel width for the rbf kernel |
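With the rbf kernel, the trained model scores a point through its support vectors rather than an explicit w. A minimal sketch of that decision rule follows; the support vectors, dual coefficients and offset p are made up for illustration (the real values come from solving the quadratic problem), while gamma = 0.1 matches Table 3:

```python
# Sketch of f(x) = sign(sum_i alpha_i * k(sv_i, x) - p) with made-up parameters.
import math

GAMMA = 0.1  # kernel width from Table 3

def rbf(a, b, gamma=GAMMA):
    """RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    return math.exp(-gamma * sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

# Hypothetical support vectors (normal-traffic points) and dual coefficients.
SUPPORT = [(1.0, 0.5), (0.8, 0.7), (1.2, 0.4)]
ALPHA = [0.4, 0.3, 0.3]
P = 0.5  # hypothetical offset p, chosen so nearby points score positive

def decide(x):
    """Return +1 (normal) if the kernel score clears the offset, else -1 (anomalous)."""
    score = sum(a * rbf(sv, x) for a, sv in zip(ALPHA, SUPPORT)) - P
    return 1 if score >= 0 else -1

print(decide((1.0, 0.5)))    # close to the normal profile -> +1
print(decide((30.0, 40.0)))  # far outside the learned boundary -> -1
```

This is exactly the behavior described earlier: traffic inside the learned boundary is classified as normal, everything outside as anomalous.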
Use of SHAP and LIME. The results of applying SHAP were visualized using summary plots showing feature contributions for individual examples as well as in aggregate form for the entire model.
Local LIME explanations were created for the most characteristic detected anoma-
lies using decision trees as surrogate models. Visualizations of feature importance were
built from these trees.
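The paper fits decision-tree surrogates; for brevity the sketch below uses the more common weighted linear surrogate, which is the same LIME idea: perturb the instance, query the black box, and fit a simple model weighted by proximity. The black-box score, feature pair and instance are hypothetical.

```python
# LIME-style local surrogate (our simplification: weighted linear, not a tree).
import math
import random

random.seed(0)

def black_box(x):
    """Stand-in for the SVM's anomaly score near the instance of interest."""
    sbytes, rate = x
    return 0.01 * sbytes + 0.6 * rate

INSTANCE = (400.0, 10.0)

def lime_weights(instance, n_samples=500, sigma=1.0):
    """Fit black_box(x) ~ w0 + w1*dx1 + w2*dx2 on proximity-weighted perturbations."""
    rows, targets, weights = [], [], []
    for _ in range(n_samples):
        dx = (random.gauss(0, 1), random.gauss(0, 1))          # local perturbation
        x = (instance[0] + dx[0], instance[1] + dx[1])
        prox = math.exp(-(dx[0] ** 2 + dx[1] ** 2) / sigma ** 2)  # proximity kernel
        rows.append((1.0, dx[0], dx[1]))
        targets.append(black_box(x))
        weights.append(prox)
    # Solve the 3x3 weighted normal equations A w = b by Gauss-Jordan elimination.
    A = [[sum(wt * r[i] * r[j] for r, wt in zip(rows, weights)) for j in range(3)]
         for i in range(3)]
    b = [sum(wt * r[i] * t for r, t, wt in zip(rows, targets, weights))
         for i in range(3)]
    for i in range(3):
        piv = A[i][i]
        for j in range(3):
            A[i][j] /= piv
        b[i] /= piv
        for k in range(3):
            if k != i:
                f = A[k][i]
                for j in range(3):
                    A[k][j] -= f * A[i][j]
                b[k] -= f * b[i]
    return b  # [local intercept, slope for sbytes shift, slope for rate shift]

coef = lime_weights(INSTANCE)
# The surrogate recovers the local slopes of the black box (~0.01 and ~0.6),
# which is what a LIME bar chart for this instance would display.
print(coef)
```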
Results
Presentation of SHAP and LIME results. The results of applying SHAP to the trained SVM model are presented in Table 4.
Table 4
| Feature | Mean SHAP contribution | Description of the feature |
|---|---|---|
| sbytes | 0.312 | Number of bytes transferred from the source |
| dbytes | 0.265 | Number of bytes transferred from the destination |
| dpkts | 0.184 | Number of packets from the destination |
| rate | 0.162 | Transmission rate of the connection |
| tcprtt | –0.149 | TCP round-trip time (RTT) |
| smsssfragments | –0.122 | Number of fragments in the flow from the source |
| dlsrcport | 0.094 | Source port for destination packets |
| djitsrc | 0.081 | Jitter (delay variation) of the source |
| sttl | –0.073 | TTL (Time to Live) value for the source |
| djitsreply | –0.068 | Jitter in response packets |
| swin | 0.057 | TCP source window size |
| dttl | 0.049 | TTL value for the destination |
| synack | –0.039 | SYN-ACK packet tracking feature |
| dlsrchost | 0.032 | Masked source host value |
| dlsrcrv | 0.028 | Masked value of the remote source virtual machine |
This table shows the mean values of SHAP contributions for the most important
features obtained when explaining the trained SVM model on the entire dataset. Positive
values indicate that high values of these features contributed to the classification as
anomalies, while negative values indicate the opposite.
Brief descriptions of each feature are also given to aid understanding of their meaning and relevance to the task of anomaly detection in network traffic.
As can be seen from Table 4, sbytes, dbytes, dpkts and rate, which reflect the volume of transmitted data, packet counts and connection speed, have the highest positive contributions; this logically indicates their importance for detecting anomalous traffic.
Among the features with negative contributions are tcprtt, smsssfragments and sttl, which characterize delays, packet fragmentation and packet lifetime; these are probably more typical of normal traffic.
The visualization and interpretation of the obtained explanations clearly show that
the SHAP and LIME methods identify the most significant features of network traffic
that influence the SVM model’s classification of specific instances as anomalies or
normal behavior.
Analysis of the aggregated SHAP results indicates that the model assigns the highest
weight in anomaly detection to attributes characterizing high values of transmitted data
(sbytes, dbytes), number of packets (dpkts) and connection speed (rate). These attributes
are often associated with various network attacks such as DDoS, port scanning and
others, which explains their high importance.
The diagram for the one-class SVM model is shown in Fig. 2.
Fig. 2
On the other hand, features related to delay (tcprtt), fragmentation (smsssfragments),
and packet lifetime (sttl) tend to reduce the likelihood of classification as anomalies
because they may reflect characteristics of normal network traffic.
LIME’s local explanations also provide valuable information about the specific
reasons why certain samples were labeled as anomalous. The detection of high values
of djitsrc (source jitter), dlsrcport (source port) combined with low values of djitsreply
(response jitter) may indicate certain types of attacks or failures.
Thus, the application of SHAP and LIME has significantly improved the inter-
pretability of SVM models for the task of anomaly detection in network traffic. The
resulting explanations provide important insights into the key attributes and patterns
affecting anomaly detection, which opens up opportunities for more effective response,
countermeasure identification, and in-depth analysis of attacker behavior.
The SHAP values for the key features are presented in Fig. 3.
Fig. 3
[Figs. 2 and 3 residue: horizontal bar charts of SHAP values; y-axis «Feature» (packet_size, flow_duration, total_fwd_packets, total_backward_packets), x-axis «SHAP value» (0.0 to 0.4); per-feature contribution bars matching the values in Table 4 (sbytes 0.312, dbytes 0.265, dpkts 0.184, rate 0.162, etc.)]
This diagram shows the SHAP values for the key attributes used by the SVM model to classify network traffic. It can be seen that attributes such as «flow_duration», «total_fwd_packets» and «total_backward_packets» have the greatest influence on the model's decision. This provides insight into which aspects of traffic are most relevant for anomaly detection.
Fig. 4 demonstrates a local explanation produced by the LIME algorithm for a specific example of network traffic classified as an anomaly. The visualization shows which attributes influenced this decision and how strongly. This approach allows network administrators to quickly understand the reasons for the classification and take appropriate action.
Fig. 4
A comparison of the SVM model's performance with and without the interpretability mechanisms (SHAP and LIME) is shown in Fig. 5. The diagram shows an improvement in accuracy and decision-making speed due to the increased interpretability of the model, which confirms the effectiveness of the proposed approach.
Fig. 5
The above visual examples clearly demonstrate that the proposed technique not
only improves the interpretability of SVM models, but also contributes to faster and
more accurate anomaly detection in network traffic. These results emphasize the practi-
cal relevance of our work and its potential contribution to the development of cyber-
security systems.
Conclusion
In this research, we have developed an interpretable system for anomaly detection
in network traffic based on SVM models. The key feature of this system is the integra-
tion of explanation modules to overcome the «black box» limitations of the original
SVM models.
We have implemented two main explanation modules: a global explanation module based on SHAP and a local explanation module using LIME. The SHAP
module provides an understanding of the general importance of various network
traffic attributes for anomaly classification by the model. The LIME module, in
turn, enables the identification of specific attributes and their values that influenced
the classification of individual examples as anomalies. The results of the developed
modules demonstrated a significant increase in the interpretability of the SVM for
the anomaly detection task. The SHAP module highlighted sbytes, dbytes, dpkts and rate as the most important features for anomaly identification.
Local LIME explanations indicated specific combinations of features like djitsrc,
dlsrcport and djitsreply for individual detected anomalies.
The key advantage of the developed modules is that they provide interpretability
for any SVM models, regardless of their internal structure. In addition, they provide
convenient visualizations to simplify the interpretation of results.
However, the explanation modules are potentially computationally expensive and
provide only an approximation of the actual behavior of the model in local areas.
The developed interpretable system for anomaly detection opens up new possibili-
ties for its application in practical cybersecurity and monitoring tasks. Operators can
now not only detect anomalies, but also obtain critical information about their probable
causes and nature based on explanations from the SHAP and LIME modules. This en-
ables faster identification of threats, identification of countermeasures and in-depth
analysis of cyberattacks.
In the future, our research will be directed toward developing specialized explanation methods aimed specifically at one-class models such as SVM, which can provide more accurate explanations. It is also promising to embed interpretability mechanisms directly into the
training of such models. In addition, we plan to apply this knowledge to other cyber-
security tasks, and to investigate the combined use of explanations with visual analytics
techniques to simplify human interpretation of results.
Overall, the interpretable system we have developed for detecting anomalies in
network traffic makes an important contribution to improving the explainability and
transparency of machine learning algorithms used in cybersecurity.
К.Ф. Керімов, С.Н. Курбанов, З.І. Азізова
АЛГОРИТМ ПОКРАЩЕННЯ ІНТЕРПРЕТОВАНОСТІ
МОДЕЛЕЙ ОПОРНИХ ВЕКТОРІВ ДЛЯ ВИЯВЛЕННЯ
АНОМАЛІЙ У МЕРЕЖЕВОМУ ТРАФІКУ
Керімов Коміл Фікратович
Ташкентський університет інформаційних технологій імені Мухаммада аль-Хорезмі,
Узбекистан,
kamil@kerimov.uz
Курбанов Сардор Нуріддінович
Ташкентський університет інформаційних технологій імені Мухаммада аль-Хорезмі,
Узбекистан,
Азізова Заріна Ільдарівна
Ташкентський університет інформаційних технологій імені Мухаммада аль-Хорезмі,
Узбекистан,
z.i.azizova18@gmail.com
Удосконалення розробки алгоритму для покращення інтерпретованості
моделей машинного навчання, що використовуються для виявлення ано-
малій у мережевому трафіку, критично важливе для сучасних систем
кібербезпеки. Головна увага приділяється однокласовим моделям опорних
векторів (SVM — Support Vector Machine), які широко застосовуються за-
вдяки високій точності виявлення аномалій, але є недостатньо прозорими,
тому їх називають моделями «чорної скриньки». Непрозорість обмежує їх
практичне застосування, особливо в системах з високими ставками, таких
як кібербезпека, де вирішальне значення має обґрунтування рішень. Для
усунення цих обмежень представлено інтерпретовану систему, яка інтегрує
два популярні методи пояснення, що не залежать від моделі: SHAP (ади-
тивні пояснення SHapley) — для глобальної інтерпретованості та LIME
(локальні інтерпретовані модельно-агностичні пояснення) — для локальної
інтерпретованості. Система розроблена не лише для виявлення аномальної
поведінки в мережевому трафіку, але й для пояснення логіки моделі як
у загальному, так і конкретному контексті. Для визначення меж нормаль-
ної поведінки однокласова SVM навчається на звичайному мережевому
трафіку. Будь-який трафік, що виходить за ці межі, класифікується як ано-
мальний. Модуль SHAP забезпечує розуміння загальної важливості атри-
бутів трафіку (наприклад, sbytes, dbytes, dpkts, rate) для всього набору да-
них, тоді як модуль LIME показує, які атрибути та їхні значення сприяли
класифікації конкретних аномалій. Такий подвійний підхід дозволяє аналі-
тикам зрозуміти як загальну поведінку моделі, так і конкретні причини ок-
ремих проявів. Результати показують помітне покращення інтерпретовано-
сті моделі, що допомагає аналітикам більш ефективно виявляти потенційні
загрози та відповідно реагувати на них. Хоча методи пояснення привносять
додаткові обчислювальні витрати та спрощують внутрішню логіку моделі,
переваги в прозорості та зручності використання значно переважують ці
недоліки. Дане дослідження сприяє створенню надійних систем штучного
інтелекту та закладає основу для майбутньої роботи над спеціалізованими
методами інтерпретованості, адаптованими до однокласових моделей.
Ключові слова: виявлення аномалій, однокласова SVM, інтерпретова-
ність, SHAP, LIME, кібербезпека, мережевий трафік.
REFERENCES
1. Chandola V., Banerjee A., Kumar V. Anomaly detection: a survey. ACM Computing Surveys.
2009. Vol. 41, N 3. Article 15. P. 1–58. DOI: https://doi.org/10.1145/1541880.1541882
2. Estimating support of a high-dimensional distribution / B. Schölkopf, J. Platt, J. Shawe-Taylor,
A. Smola, R. Williamson. Neural Computation. 2001. Vol. 13, N 7. P. 1443–1471. DOI:
10.1162/089976601750264965
3. Lundberg S.M., Lee S.-I. A unified approach to interpreting model predictions. NIPS’17: Pro-
ceedings of the 31st International Conference on Neural Information Processing Systems. USA :
Long Beach California, 2017, December 4–9. P. 4765–4774. DOI: 10.48550/arXiv.1705.07874
4. Ribeiro M.T., Singh S., Guestrin C. «Why Should I Trust You?»: explaining the predictions of
any classifier. KDD’16: The 22nd ACM SIGKDD International Conference on Knowledge
Discovery and Data Mining. USA: San Francisco, 2016, August 13–17. P. 1135–1144. DOI:
10.1145/2939672.2939778
5. Interpretable support vector machine and its application to rehabilitation assessment / W. Kim, H. Joe, H.-S. Kim, D. Yoon. Electronics. 2024. Vol. 13, N 18. ID: 3584. DOI: 10.3390/electronics13183584
Submitted 11.11.2024
Revised 10.12.2024
|
| id | nasplib_isofts_kiev_ua-123456789-211402 |
| institution | Digital Library of Periodicals of National Academy of Sciences of Ukraine |
| issn | 0572-2691 |
| language | English |
| last_indexed | 2026-03-13T03:01:10Z |
| publishDate | 2025 |
| publisher | Інститут кібернетики ім. В.М. Глушкова НАН України |
| record_format | dspace |
| spelling | Kerimov, K. Kurbanov, S. Azizova, Z. 2026-01-01T19:39:59Z 2025 Algorithm for improving interpretability of support vector models for anomaly detection in network traffic / K. Kerimov,S. Kurbanov, Z. Azizova // Проблемы управления и информатики. — 2025. — № 3. — С. 66-73. — Бібліогр.: 5 назв. — англ. 0572-2691 https://nasplib.isofts.kiev.ua/handle/123456789/211402 004.056.53 10.34229/1028-0979-2025-3-6 This paper is devoted to enhancing the development of an algorithm aimed at improving the interpretability of machine learning models used for detecting anomalies in network traffic, which is critical for modern cybersecurity systems. The focus is on one-class support vector machine (SVM) models, which are widely used for their high accuracy in anomaly detection but suffer from a lack of transparency, often being referred to as «black box» models. Удосконалення розробки алгоритму для покращення інтерпретованості моделей машинного навчання, що використовуються для виявлення аномалій у мережевому трафіку, критично важливе для сучасних систем кібербезпеки. Головна увага приділяється однокласовим моделям опорних векторів (SVM — Support Vector Machine), які широко застосовуються завдяки високій точності виявлення аномалій, але є недостатньо прозорими, тому їх називають моделями «чорної скриньки». en Інститут кібернетики ім. В.М. Глушкова НАН України Проблеми керування та інформатики Роботи та системи штучного інтелекту Algorithm for improving interpretability of support vector models for anomaly detection in network traffic Алгоритм покращення інтерпретованості моделей опорних векторів для виявлення аномалій у мережевому трафіку Article published earlier |
| spellingShingle | Algorithm for improving interpretability of support vector models for anomaly detection in network traffic Kerimov, K. Kurbanov, S. Azizova, Z. Роботи та системи штучного інтелекту |
| title | Algorithm for improving interpretability of support vector models for anomaly detection in network traffic |
| title_alt | Алгоритм покращення інтерпретованості моделей опорних векторів для виявлення аномалій у мережевому трафіку |
| title_full | Algorithm for improving interpretability of support vector models for anomaly detection in network traffic |
| title_fullStr | Algorithm for improving interpretability of support vector models for anomaly detection in network traffic |
| title_full_unstemmed | Algorithm for improving interpretability of support vector models for anomaly detection in network traffic |
| title_short | Algorithm for improving interpretability of support vector models for anomaly detection in network traffic |
| title_sort | algorithm for improving interpretability of support vector models for anomaly detection in network traffic |
| topic | Роботи та системи штучного інтелекту |
| topic_facet | Роботи та системи штучного інтелекту |
| url | https://nasplib.isofts.kiev.ua/handle/123456789/211402 |
| work_keys_str_mv | AT kerimovk algorithmforimprovinginterpretabilityofsupportvectormodelsforanomalydetectioninnetworktraffic AT kurbanovs algorithmforimprovinginterpretabilityofsupportvectormodelsforanomalydetectioninnetworktraffic AT azizovaz algorithmforimprovinginterpretabilityofsupportvectormodelsforanomalydetectioninnetworktraffic AT kerimovk algoritmpokraŝennâínterpretovanostímodeleiopornihvektorívdlâviâvlennâanomalíiumereževomutrafíku AT kurbanovs algoritmpokraŝennâínterpretovanostímodeleiopornihvektorívdlâviâvlennâanomalíiumereževomutrafíku AT azizovaz algoritmpokraŝennâínterpretovanostímodeleiopornihvektorívdlâviâvlennâanomalíiumereževomutrafíku |