
NITTE MEENAKSHI INSTITUTE OF TECHNOLOGY
AN AUTONOMOUS INSTITUTION AFFILIATED TO VTU, BELAGAVI, ACCREDITED by NAAC (‘A+’ Grade)

YELAHANKA, BENGALURU-560064

A Project Report on
A novel approach to detect vulnerability in Docker image through the
implementation of DevSecOps approach
Submitted in partial fulfillment of the requirement for the award of the degree of

BACHELOR OF ENGINEERING
IN
INFORMATION SCIENCE AND ENGINEERING
By
Anand Kumar Rai 1NT21IS028
Niraj Agarwal 1NT21IS103
Sanjeev S 1NT21IS140
Yashi Sehgal 1NT21IS190

Under the Guidance of

Mr. Mohan Kumar TG


Assistant Professor
Department of Information Science and Engineering
Nitte Meenakshi Institute of Technology, Bengaluru - 560064

DEPARTMENT OF INFORMATION SCIENCE AND ENGINEERING


(Accredited by NBA Tier-1)
2024 - 25
NITTE MEENAKSHI INSTITUTE OF TECHNOLOGY
(AN AUTONOMOUS INSTITUTION, AFFILIATED TO VTU, BELAGAVI)

DEPARTMENT OF INFORMATION SCIENCE AND ENGINEERING

Accredited by NBA Tier-1

CERTIFICATE

Certified that the project work entitled “A novel approach to detect vulnerability in Docker image
through the implementation of DevSecOps approach” carried out by Mr. Sanjeev S - USN
1NT21IS140, Mr. Anand Kumar Rai - USN 1NT21IS028, Mr. Niraj Agarwal - USN 1NT21IS103,
Ms. Yashi Sehgal - USN 1NT21IS190 bonafide students of NITTE Meenakshi Institute of Technol-
ogy in partial fulfillment for the award of Bachelor of Engineering in Information Science and Engineering
of the Visvesvaraya Technological University, Belagavi during the year 2024 - 25. It is certified that all cor-
rections/suggestions indicated for Internal Assessment have been incorporated in the Report deposited in the
departmental library. The project report has been approved as it satisfies the academic requirements in respect
of Project work prescribed for the said Degree.

Name and Signature of the Guide Name and Signature of the HOD Signature of the Principal
Mr. Mohan Kumar TG Dr. Mohan SG Dr. H.C Nagaraj

External Viva

Name of the examiners Signature with date

1.
2.

NITTE MEENAKSHI INSTITUTE OF TECHNOLOGY
(AN AUTONOMOUS INSTITUTION, AFFILIATED TO VTU, BELAGAVI)

DEPARTMENT OF INFORMATION SCIENCE AND ENGINEERING

Accredited by NBA Tier-1

DECLARATION

This is to certify that the project report entitled A novel approach to detect vulnerability in Docker
image through the implementation of DevSecOps approach which is being submitted in partial fulfill-
ment of the requirements for the award of the degree of Bachelor of Engineering in Information Science
and Engineering of Visvesvaraya Technological University, Belagavi, during the year 2024 - 25, is a
record of the authentic work carried out under the guidance of Mr. Mohan Kumar TG, Assistant Professor,
Department of Information Science and Engineering, Nitte Meenakshi Institute of Technology, Bengaluru.
The project work is original and has not formed the basis for the award of any degree, diploma, fellowship, or
any other similar titles.

Signature of the Student with Date

Place: Bengaluru
Date: 18 January 2025

ABSTRACT

This project addresses the detection and mitigation of vulnerabilities in Docker images, which present
security risks across the software lifecycle. Containers, especially Docker, are essential in modern software de-
ployment due to their portability and efficiency. However, they introduce security challenges, with base images
often containing unpatched vulnerabilities or insecure dependencies that attackers can exploit. The proposed
solution integrates static and dynamic analysis techniques within a DevSecOps framework to detect vulnera-
bilities in containerized environments. By embedding security practices into the DevOps pipeline, the system
ensures continuous and automated vulnerability detection without slowing down development or compromis-
ing operational efficiency. The solution comprises three core components: a detection engine using up-to-date
databases to identify vulnerabilities, a DevSecOps integration module that embeds security checks into the
development pipeline, and a real-time monitoring system for continuous scanning and alerting. Together, these
components provide a scalable security framework for Dockerized applications. This project advances container
security by enabling proactive vulnerability detection and real-time threat mitigation without disrupting devel-
opment workflows, fostering a culture of continuous security management in organizations using microservices
and cloud-native technologies.

ACKNOWLEDGEMENT

Acknowledgments are extended to Dean Dr. V. Sridhar and Principal Dr. H.C. Nagaraj for their provision
of essential facilities and support throughout this project. Thanks are directed to our Head of the Department,
Dr. Mohan S.G., for his continuous support and motivation. Special recognition is given to our project guide,
Mr. Mohan Kumar T.G., for his insightful advice and steadfast support, which were crucial to the successful
completion of this project. Special thanks are extended to the technical staff and lab assistants for their
essential resources and assistance during the experimental phases. Acknowledgment is also given to all professors
and lecturers who have imparted their knowledge and skills, providing the foundation for this project. Their
dedication to teaching has been a source of inspiration.

Sanjeev S [1NT21IS140],
Anand Kumar Rai [1NT21IS028],
Niraj Agarwal [1NT21IS103],
Yashi Sehgal [1NT21IS190].

Contents

1 Introduction 1

2 Literature Review 3
2.1 Vulnerability Analysis of Official and Verified Docker Hub Images . . . . . . . . . . . . . . . . . . 3
2.2 ZeroDVS: Trace-Ability and Security Detection of Container Image Based on Inheritance Graph 3
2.3 Vulnerability Detection and Classification using DevSecOps . . . . . . . . . . . . . . . . . . . . . 4
2.4 A Study on Container Vulnerability Exploit Detection . . . . . . . . . . . . . . . . . . . . . . . . 4
2.5 Security Audit of Docker Container Images in Cloud Architecture . . . . . . . . . . . . . . . . . . 5
2.6 A Hybrid Model for Real-Time Docker Container Threat Detection and Vulnerability Analysis . 5
2.7 Native Container Security for Running Applications: A Review . . . . . . . . . . . . . . . . . . . 6
2.8 Last Line of Defense: Towards Certifying Dependable SCADA Networks by Fostering Threat Hunting with Deception . . . . . . . . . . 6
2.9 Secure Inter-Container Communications Using XDP/eBPF . . . . . . . . . . . . . . . . . . . . . 6
2.10 Towards an Understanding of Docker Images and Performance Consequences on Container Storage Systems at Scale . . . . . . . . . . 7
2.11 Hyperion: Impact Hardware High-Performance and Secure System for Container Networks . . . 7
2.12 A Hybrid System Call Profiling Model for Container Protection . . . . . . . . . . . . . . . . . . . 7
2.13 Cloud Migration Research: A Systematic Review . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.14 SysFlow: Towards a Programmable Zero Trust Security Architecture for Systems . . . . . . . . . 8
2.15 Container Cloud Security Vulnerability: An Empirical Analysis of Security Risks Associated with Information Leakages . . . . . . . . . . 8
2.16 Ambush from All Sides: Analyzing Security Threats for Open-Source SWE CI/CD Pipelines . . 8
2.17 Condo: Enhancing Container Isolation Through Kernel Permission Data Protection . . . . . . . 9
2.18 DIVDS: Docker Image Vulnerability Diagnostic System . . . . . . . . . . . . . . . . . . . . . . . 9
2.19 Malicious Investigation of Docker Images on Basis of Vulnerability Databases . . . . . . . . . . . 10
2.20 Monitoring Solution for Cloud-Native DevSecOps . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.21 Reflections on Trusting Docker: Invisible Malware in Continuous Integration Systems . . . . . . 10
2.22 Security Analysis of Docker Containers for ARM Architecture . . . . . . . . . . . . . . . . . . . . 11
2.23 Security Audit of Docker Container Images in Cloud Architecture . . . . . . . . . . . . . . . . . . 11
2.24 Should You Upgrade Official Docker Hub Images in Production Environments? . . . . . . . . . . 11
2.25 The Practice and Application of a Novel DevSecOps Platform on Security . . . . . . . . . . . . . 12
2.26 Vulnerability Analysis of Docker Hub Official Images and Verified Images . . . . . . . . . . . . . 12

3 Requirement Specification 13
3.1 Hardware Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.1.1 Development Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.1.2 Deployment Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.2 Software Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.2.1 Development Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.2.2 Deployment Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.3 Functional Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.4 Non-Functional Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

4 Framework and System Design 15
4.1 Actual Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4.2 Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.3 Layer Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
4.4 System Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.4.1 Phase 1: Infrastructure Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.4.2 Phase 2: Private Git Repository . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.4.3 Phase 3: CI/CD Pipeline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

5 Implementation 23
5.1 Development Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
5.2 Build and Package Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
5.3 Deployment Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
5.4 Decision Point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

6 Testing and Results 25


6.1 Jenkins Activation and Terraform Triggering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
6.2 Codebase Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
6.3 Pipeline Execution in Jenkins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
6.4 Compilation and Unit Testing with Maven . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
6.5 Code Quality Check with SonarQube . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
6.6 Package Building and Nexus Repository . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
6.7 Docker Image Generation and Push . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
6.8 Vulnerability Check with Aqua Trivy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
6.9 KubeAudit for Security Check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
6.10 Email Notifications and Successful Build . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
6.11 Application Deployment and Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
6.12 AI-Powered Vulnerability Chatbot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

7 Conclusion and Future Scope 34


7.1 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
7.2 Future Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

List of Figures

4.1 Flow chart illustrating the actual scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15


4.2 Architecture diagram explaining both layers structuring . . . . . . . . . . . . . . . . . . . . . . . 17
4.3 Overview of base analysis and package analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
4.4 Overall System Design for Docker Image Security . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

6.1 GitHub Repository with the Full-Stack Game Codebase . . . . . . . . . . . . . . . . . . . . . . . 25


6.2 Jenkins Pipeline Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
6.3 Jenkins Console Output for the Board Game Pipeline . . . . . . . . . . . . . . . . . . . . . . . . 26
6.4 SonarQube Code Quality Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
6.5 SonarQube Code Quality Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
6.6 Nexus Repository with the Stored Package . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
6.7 Docker Image Generation and Push to Docker Hub . . . . . . . . . . . . . . . . . . . . . . . . . . 28
6.8 General Parameters considered for vulnerability scanning . . . . . . . . . . . . . . . . . . . . . . 29
6.9 KubeAudit Security Check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
6.10 Email Received by Admin regarding the vulnerability scan . . . . . . . . . . . . . . . . . . . . 31
6.11 Prometheus collecting data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
6.12 Grafana representing the data collected by Prometheus . . . . . . . . . . . . . . . . . . . . . . 32
6.13 Grafana representing the data collected by Prometheus . . . . . . . . . . . . . . . . . . . . . . 32
6.14 AI-Powered Vulnerability Chatbot working for chessgame image (Low Risk) . . . . . . . . . . . . 33
6.15 AI-Powered Vulnerability Chatbot working for staticgames image (High Risk) . . . . . . . . . . . 33

Chapter 1

Introduction

In today’s rapidly evolving digital landscape, containerization has become a cornerstone for efficient and scalable
software deployment. Docker, a leading containerization platform, enables developers to package applications
and their dependencies into portable containers, facilitating seamless deployment across various environments.
This portability significantly reduces the “it works on my machine” problem, ensuring consistency from develop-
ment to production. The ability to move containers between different computing environments, such as develop-
ment, testing, staging, and production, with minimal modifications is a key advantage that has accelerated the
adoption of containerized applications. Docker has proven to be an essential tool in the software development
lifecycle, streamlining the deployment process and enabling organizations to adopt microservices architectures,
which improve scalability and resource optimization. However, despite its convenience and widespread adop-
tion, Docker introduces significant security challenges that cannot be overlooked. Vulnerabilities within Docker
images can pose serious risks to the entire application lifecycle, from development to deployment and beyond.
These vulnerabilities can be exploited by malicious actors, leading to data breaches, system compromises, and
other severe security incidents. As organizations increasingly rely on containerized applications to deliver criti-
cal services, the need for robust security measures to protect these environments becomes imperative. Docker
images are often built using a range of open-source components, some of which may be outdated or have
known security flaws, and if left unchecked, these vulnerabilities can propagate throughout the lifecycle of an
application, potentially leading to devastating security breaches. The nature of containers—rapid deployment,
dynamic scaling, and isolation—further complicates the detection and mitigation of security threats, making it
even more crucial for organizations to implement proactive security strategies.

Traditional security practices,
which often occur late in the development process, are insufficient to address the dynamic and fast-paced nature
of modern software development. Security testing and assessments typically take place after the application
has already been developed and is close to being deployed, leaving little room for corrective action. This ap-
proach fails to meet the needs of containerized environments, where vulnerabilities can be introduced quickly
and can propagate rapidly across distributed systems. This has led to the rise of DevSecOps, a methodology
that emphasizes the integration of security practices within the DevOps workflow, ensuring that security is not
treated as an afterthought but is instead a continuous process that begins from the earliest stages of develop-
ment. DevSecOps enables organizations to shift left in their security practices, where security is considered a
shared responsibility across development, operations, and security teams. By embedding security checks and
controls throughout the development pipeline, potential vulnerabilities can be identified and mitigated early
in the container lifecycle, significantly enhancing the overall security posture of applications. The DevSecOps
approach involves automating security scans, continuous integration of security tools, and active monitoring
of runtime environments, ensuring that the security of applications is constantly maintained, even as they are
updated and deployed.

The primary focus of this project is to develop a novel approach to detect vulnerabilities
in Docker images, integrating this detection mechanism seamlessly into a DevSecOps framework. Our approach
aims to leverage both static and dynamic analysis techniques to provide comprehensive security assessments of
Docker images, addressing both known vulnerabilities and potential runtime threats. Static analysis involves
scanning the image layers for known vulnerabilities, misconfigurations, and outdated dependencies, which can
be identified based on regularly updated vulnerability databases, security advisories, and configuration best
practices. This analysis enables developers to detect potential flaws early in the development cycle, allowing
for faster remediation before the image is deployed in a production environment. In contrast, dynamic analysis
involves executing the container in a controlled environment to monitor its behavior, network activity, and
interactions with the underlying system to detect potential runtime threats, such as privilege escalation or
unexpected network communication. This approach provides an additional layer of security by monitoring the
live behavior of the container, ensuring that threats are detected even when they are not present in static code
analysis.

Key components of our solution include a vulnerability detection engine, a DevSecOps integration
module, and a real-time monitoring system. The vulnerability detection engine scans Docker images for known
vulnerabilities using up-to-date vulnerability databases, security advisories, and configuration checks, ensuring
that the images do not contain any known security flaws. The DevSecOps integration module ensures that the
results of these scans are seamlessly incorporated into the existing DevOps pipeline, enabling continuous security
assessments without disrupting the overall development workflow. This integration is essential for maintaining
the agility of DevOps practices while incorporating proactive security measures. Finally, the real-time moni-
toring system provides ongoing evaluation of running Docker containers, alerting stakeholders to any emerging
threats, anomalous behaviors, or security violations in real time. This system allows for continuous monitoring
of containers throughout their lifecycle, from development to production, ensuring that any security incidents
are detected and addressed promptly. By adopting this comprehensive approach, we aim to foster a culture of
continuous improvement and resilience within containerized environments, where security is constantly assessed
and enhanced. Our solution not only detects vulnerabilities early but also maintains minimal performance
overhead, ensuring that the operational efficiency of the development and deployment processes is not compro-
mised. This is particularly crucial in containerized environments, where rapid scaling and frequent updates are
the norms. This project represents a significant advancement in container security, providing a scalable and
effective method to safeguard Dockerized applications in the dynamic and often vulnerable landscape of modern
software development.

In summary, this project highlights the importance of integrating robust security mea-
sures within the container lifecycle and demonstrates the effectiveness of a DevSecOps approach in enhancing
the security posture of Dockerized applications. By offering continuous vulnerability detection and real-time
monitoring, we aim to mitigate potential risks, prevent security breaches, and promote a resilient and secure
application development environment. Through this solution, we hope to contribute to the ongoing efforts
to improve container security and provide organizations with the tools needed to secure their containerized
applications in an increasingly complex and vulnerable digital landscape.
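To make the role of the detection engine concrete, the sketch below shows, in Python, how a pipeline stage could invoke the Trivy scanner (used later in this report) and fail the build when HIGH or CRITICAL findings are reported. The image name is a placeholder and the JSON field names reflect Trivy's report format; the snippet is illustrative rather than the exact gate used in our pipeline.

import json
import subprocess
import sys

BLOCKING_SEVERITIES = {"HIGH", "CRITICAL"}

def scan_image(image: str) -> dict:
    """Run a Trivy image scan and return the parsed JSON report."""
    proc = subprocess.run(
        ["trivy", "image", "--format", "json", "--quiet", image],
        capture_output=True, text=True, check=True,
    )
    return json.loads(proc.stdout)

def blocking_findings(report: dict) -> list:
    """Collect vulnerabilities whose severity should block deployment."""
    findings = []
    for result in report.get("Results", []):
        for vuln in result.get("Vulnerabilities") or []:
            if vuln.get("Severity") in BLOCKING_SEVERITIES:
                findings.append((vuln.get("VulnerabilityID"), vuln.get("PkgName")))
    return findings

if __name__ == "__main__":
    # "myorg/boardgame:latest" is a placeholder image name, not the project's image.
    image = sys.argv[1] if len(sys.argv) > 1 else "myorg/boardgame:latest"
    blockers = blocking_findings(scan_image(image))
    if blockers:
        print(f"{len(blockers)} blocking vulnerabilities found; failing the stage:")
        for cve, pkg in blockers:
            print(f"  {cve} in {pkg}")
        sys.exit(1)
    print("No HIGH/CRITICAL vulnerabilities detected; the stage passes.")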

Chapter 2

Literature Review

The literature review explores various methodologies and tools developed for vulnerability detection in Docker
images and their integration into the DevSecOps pipeline.

2.1 Vulnerability Analysis of Official and Verified Docker Hub Images

Ruchika Malhotra’s study analyzes official and verified Docker Hub images, performing vulnerability scans using
tools like Aqua, Trivy, Anchore, JFrog Xray, and Docker Scan. This research provides a comprehensive analysis
of how these tools function and evaluates their effectiveness in detecting vulnerabilities in Docker images. One
of the main points of the study is the comparison of the detection capabilities of different tools, where it is found
that the discrepancy in their results can be attributed to their varying databases, scanning methodologies, and
the type of vulnerabilities they focus on. Some tools, like Trivy and Anchore, provide extensive vulnerability
coverage due to their large vulnerability databases, while others, like JFrog Xray and Docker Scan, specialize
in certain areas, such as license compliance and specific vulnerability types. The research methodology includes
the examination of Docker images from different categories such as operating systems, application servers,
and databases. The images are scanned using each tool, and the results are compared to identify common
vulnerabilities as well as those that are detected by only a single tool. The performance overhead of these
scanning tools is also analyzed to assess their impact on the efficiency of container deployment and runtime
operations. For instance, some tools may introduce significant delays in processing, which could affect the
scalability of Docker-based systems in production environments. The study concludes that no single tool can
guarantee complete security for Docker images, and that a multi-tool approach, coupled with continuous updates
to vulnerability databases, is essential for maintaining optimal security practices. Malhotra’s findings emphasize
that security professionals must stay updated on the latest scanning tools and their capabilities to proactively
detect and mitigate vulnerabilities in Docker containers. Furthermore, the study highlights the importance of
integrating vulnerability scanning into DevSecOps workflows, where continuous scanning and reporting can aid
in identifying security gaps before containers are deployed into production. The research ultimately advocates
for a holistic, continuous security strategy to minimize risks in containerized environments. [1]
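As an illustration of the kind of cross-tool comparison performed in this study (not the authors' actual scripts), the following minimal Python sketch takes two scanner reports that have been normalised to a simple {"cves": [...]} structure and reports the overlap and the findings unique to each tool:

import json

def load_cves(path: str) -> set:
    # Each report is assumed to be pre-normalised to {"cves": ["CVE-...", ...]}.
    with open(path) as fh:
        return set(json.load(fh)["cves"])

def compare(report_a: str, report_b: str) -> None:
    cves_a, cves_b = load_cves(report_a), load_cves(report_b)
    print("Detected by both tools :", len(cves_a & cves_b))
    print("Only in first report   :", len(cves_a - cves_b))
    print("Only in second report  :", len(cves_b - cves_a))

# Example usage with hypothetical, pre-normalised report files:
# compare("trivy_nginx.json", "anchore_nginx.json")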

2.2 ZeroDVS: Trace-Ability and Security Detection of Container Image Based on Inheritance Graph

Yan Zheng’s research focuses on enhancing the traceability and security detection of Docker images through the
concept of image inheritance graphs. This method aims to provide a more structured and organized approach
to monitoring and securing container images by representing their lineage in a graph format. The inheritance
graph maps out the relationships between images, showing how base images are inherited and modified over
time. This visual representation makes it easier to trace the origin of a particular image, allowing for better
vulnerability detection and historical tracking of image changes. Zheng’s approach involves integrating image
inheritance graphs with vulnerability scanning tools, enabling a more thorough security assessment. The scan-
ning modules examine the security posture of images based on their lineage, identifying vulnerabilities that may
arise from the use of outdated or compromised base images. One of the advantages of this method is its ability
to detect risks that could otherwise go unnoticed in traditional scanning approaches, particularly those associ-
ated with inherited vulnerabilities from parent images. The research also considers the practical implications
of resource optimization in containerized environments. By sharing common layers across related images in the
inheritance graph, storage and bandwidth usage can be significantly reduced, making image deployment and
updates more efficient. However, Zheng acknowledges several challenges in implementing this approach. Main-
taining an accurate and up-to-date inheritance graph can be difficult, especially in environments with frequent
image updates or large-scale container deployments. Synchronizing image metadata across multiple systems
requires robust automation tools to ensure the accuracy of the graph and prevent discrepancies that could lead
to missed vulnerabilities. Furthermore, managing dependencies and ensuring that every update or change is
properly recorded in the graph presents a logistical challenge in fast-paced development environments. Despite
these challenges, Zheng’s research demonstrates the potential of using structured image metadata to improve
container security by addressing the root causes of vulnerabilities. In conclusion, Zheng’s work highlights the
need for more sophisticated tools that not only detect vulnerabilities in containers but also enhance traceabil-
ity and forensic analysis capabilities, which are essential for maintaining security in dynamic and large-scale
containerized systems. [2]

2.3 Vulnerability Detection and Classification using DevSecOps


This study presents an innovative approach to improving the security of Internet of Things (IoT) devices by
integrating machine learning techniques into vulnerability detection and classification processes. With the
growing number of IoT devices, ensuring the security of these devices has become a significant challenge due
to their diverse nature and the rapid pace of technology adoption. This paper proposes integrating DevSecOps
practices into the Docker image lifecycle to enhance vulnerability detection and mitigation throughout the
development pipeline. DevSecOps aims to combine development, security, and operations to build security
directly into the development process, ensuring that security is not an afterthought but a fundamental part of
the design and deployment phases. The study emphasizes the importance of automating vulnerability scans
and integrating them into continuous integration and continuous deployment (CI/CD) pipelines to identify and
address vulnerabilities early in the development lifecycle. One of the key benefits of this approach is the proactive
identification of insecure Docker images before they are deployed into production, reducing the risk of exposing
IoT devices to potential exploits. The paper also discusses the application of machine learning algorithms for the
classification of vulnerabilities based on various features extracted from the Docker image metadata. By training
models on large datasets of previously identified vulnerabilities, the approach can classify and predict new
vulnerabilities with a high degree of accuracy. This machine learning-based classification system can enhance
traditional vulnerability scanning tools by providing more dynamic and context-sensitive results. However,
implementing machine learning in this domain presents some challenges, including the need for large, labeled
datasets and the difficulty of interpreting machine learning model results. Additionally, the study discusses
the importance of collaboration between development and security teams in adopting DevSecOps practices.
A holistic approach to security is critical to ensuring that vulnerabilities are detected early and mitigated
effectively. The paper concludes that while machine learning holds great potential for improving vulnerability
detection in IoT devices and Docker images, it must be combined with strong development practices and security
protocols to create a robust security framework. The research advocates for a shift toward more integrated and
automated security measures in the IoT and containerized environments. [3]
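A minimal sketch of the classification idea, assuming toy feature vectors and labels rather than the paper's real dataset, could look as follows in Python with scikit-learn:

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy features per finding: [CVSS base score, package age in years, is_os_package]
X = [
    [9.8, 4.0, 1], [7.5, 2.5, 1], [5.3, 1.0, 0],
    [4.0, 0.5, 0], [8.1, 3.0, 1], [3.1, 0.2, 0],
]
# Toy labels: 1 = prioritise for remediation, 0 = lower priority
y = [1, 1, 0, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)
print("Held-out accuracy on the toy data:", clf.score(X_test, y_test))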

2.4 A Study on Container Vulnerability Exploit Detection


This paper explores the detection of container vulnerabilities, focusing on how potential exploits can be identified
and mitigated in containerized environments. The research discusses the architecture of a platform designed
to help detect and address vulnerabilities in containerized applications. The platform incorporates several
important features, including user authentication, product categorization, inventory management, and secure
payment gateways that comply with healthcare regulations. The study emphasizes how containerization can
help maintain high levels of performance and security, especially in industries like healthcare, where sensitive
data must be protected at all costs. The paper delves into the implementation details of the platform, discussing
the technologies used for both frontend and backend development as well as database management. It highlights
how these technologies enable secure and efficient handling of data, providing a user-friendly interface while also
ensuring compliance with relevant regulations, such as HIPAA in the healthcare sector. The system’s design is
scalable, accommodating large numbers of users and products without compromising performance. The paper
also addresses the need for rigorous vulnerability detection mechanisms to identify potential risks associated
with containerized environments. It emphasizes the importance of vulnerability scanning tools and how they can
be incorporated into the container lifecycle to continuously monitor for known security issues. By integrating
these tools into the development and deployment process, the risk of introducing security flaws into the system
can be significantly reduced. Moreover, the research discusses the challenges faced in the secure deployment of e-
commerce platforms for healthcare products. These include managing sensitive user information, ensuring secure
transactions, and complying with industry standards for privacy and security. The paper concludes by stressing
the importance of secure and well-architected systems that balance usability with strong security measures,
particularly in industries like healthcare where data integrity and patient confidentiality are paramount. It
serves as a valuable reference for developers looking to implement secure e-commerce solutions in regulated
sectors. [4]

2.5 Security Audit of Docker Container Images in Cloud Architecture

The paper discusses the integration of Docker containers within cloud architectures and focuses on the secu-
rity audits of container images in cloud environments. As cloud computing continues to grow, ensuring the
security of containers becomes critical in maintaining the integrity of applications deployed in the cloud. The
paper begins by explaining the role of edge computing in reducing network congestion and improving data
processing efficiency. By processing data locally, edge computing helps enhance the responsiveness of applica-
tions and reduces the dependency on centralized cloud infrastructure. This concept is particularly beneficial
for applications requiring real-time data processing, such as IoT devices, autonomous vehicles, and smart city
infrastructure. Furthermore, the paper explores the application of machine learning algorithms at the edge
to optimize data analysis, which further improves the overall efficiency of cloud architectures. The study also
examines the role of blockchain technology in ensuring the security and transparency of transactions in edge
computing environments. Blockchain’s decentralized ledger provides a secure and tamper-resistant method of
recording transactions, ensuring the integrity of data and applications. This is especially crucial in distributed
systems where trust between nodes needs to be established. The research discusses how blockchain can be
leveraged to enhance data security and ensure that all interactions within the edge computing network are fully
auditable and transparent. The paper concludes by addressing the challenges and future research directions in
the fields of edge computing, machine learning, and blockchain. It points out that while these technologies offer
great potential for improving security and efficiency, their integration poses technical and logistical challenges
that must be addressed in future studies. In particular, the need for standardized frameworks and protocols
to facilitate the seamless integration of these technologies remains a key area for further exploration. The
research highlights the potential for innovation in edge computing applications, especially in domains like IoT,
autonomous systems, and smart cities, where distributed systems and real-time data processing are crucial. [5]

2.6 A Hybrid Model for Real-Time Docker Container Threat Detection and Vulnerability Analysis

The paper titled ”A Hybrid Model for Real-Time Docker Container Threat Detection and Vulnerability Anal-
ysis,” published in the International Journal of Intelligent Systems and Applications in Engineering, addresses
the pressing issue of container security, which has become a significant concern for organizations adopting
microservices and cloud computing technologies. The complexity and volume of security data generated by
network devices, servers, and applications pose challenges for timely and effective analysis, often leading to
delays in responding to security incidents. To mitigate these challenges, the authors propose a hybrid model
that combines various open-source tools for real-time threat detection and vulnerability analysis in Docker con-
tainer environments. The model aims to shorten the time between the occurrence of security incidents and
their detection, allowing for swift identification of compromised systems and timely mitigation of threats. The
proposed solution involves the collection, processing, and presentation of relevant security information through
dashboard displays, facilitating real-time monitoring and response. The paper conducts a comprehensive review
of existing literature on container security and solutions, highlighting the widespread acceptance of containers
as a standardized method for deploying microservices. Despite their advantages, containers introduce secu-
rity concerns that need to be addressed to ensure robust protection against sophisticated cyber-attacks. By

5
implementing the hybrid model, organizations can enhance their ability to detect security anomalies, thereby
improving their overall security posture and reducing the risk of breaches in containerized environments.
[6]

2.7 Native Container Security for Running Applications: A Review


The paper titled ”Native Container Security for Running Applications: A Review” presents a snapshot anal-
ysis of containerization solutions and their security components. It investigates technologies like LXC, LXD,
Singularity, Docker, Kata-containers, and gVisor, focusing on isolation capabilities derived from namespaces,
hypervisor/kernel isolation, network management (such as bridges and bandwidth limits), and storage options
(directory, SIF, BTRFS, LVM, ZFS, OverlayFS). Additionally, the security measures in place, including cgroups,
capabilities, seccomp, and AppArmor, are analyzed. The paper concludes that Kata-containers provide the best
isolation due to their use of hypervisors, while LXD stands out for offering the best network and storage fea-
tures combined with a decent level of security. However, the study has certain limitations. It does not include
performance comparisons and is therefore purely static. Moreover, it does not address container scheduling or
the security of applications running within containers. The study deals primarily with default settings, which
are often customized in practice. Technologies discussed in the paper include namespaces, control groups, capa-
bilities, seccomp, AppArmor, hypervisors, Linux bridges, Open vSwitch, and various file systems like BTRFS,
LVM, ZFS, and OverlayFS, along with tools such as Docker, LXC, LXD, Singularity, Kata-containers, gVisor,
runc, kata-runtime, runsc, and OCI specifications.
[7]

2.8 Last Line of Defense: Towards Certifying Dependable SCADA Networks by Fostering Threat Hunting with Deception

The paper titled ”Last Line of Defense: Towards Certifying Dependable SCADA Networks by Fostering Threat
Hunting with Deception” presents a research initiative that explores the use of deception and the attack kill
chain for anticipatory threat hunting in SCADA networks. The approach involves creating a ”baiting farm”
consisting of mimicked SCADA components and lures, which attract attackers and encourage them to reveal
their tactics, techniques, and procedures (TTPs) while threat hunters collect indicators of compromise (IOCs).
This collected information is then leveraged to refine threat detection and prevention strategies within real
SCADA network environments. The primary goal of this research is to identify previously unknown threats that
reactive security measures alone cannot counter. However, the effectiveness of the proposed decoy farm depends
largely on its authenticity. More advanced attackers may be able to distinguish the decoy systems from real
ones. Additionally, the paper lacks details regarding the specific procedures used to evaluate attacker behavior
within the decoy farm and how this data is utilized to enhance defensive measures. Moreover, the operational
processes for deploying and managing the decoy farm could be resource-intensive and cumbersome. [8]

2.9 Secure Inter-Container Communications Using XDP/eBPF


The paper titled ”Secure Inter-Container Communications Using XDP/eBPF” addresses security concerns in
container networks, particularly the limitations of managing access through IPs and application layer inspec-
tion. The authors propose Bastion+, a micro-level security solution that enforces stringent network security on
a per-container basis. Bastion+ utilizes XDP/eBPF to construct a security stack for each container, ensuring
that traffic is allowed only between specific container pairs, and it supports the chaining of select security ca-
pabilities. Additionally, the paper introduces a security policy assistant, which aids administrators in creating
and fine-tuning network policies to enhance security in container environments. However, Bastion+ introduces
container network enhancements that may not be feasible in all settings. The performance of security function
chaining is dependent on the efficiency of the chained functions. Moreover, the security policy assistant is a
recommendation rather than an automated feature, requiring manual intervention by administrators to config-
ure and manage network policies. The technologies discussed in this paper include containers, microservices,
Docker, Kubernetes, container network interface (CNI) plugins (such as Flannel, Weave, Calico, and Cilium),
iptables, network namespaces, XDP, eBPF, security policies in container networks, the implementation of secu-
rity function chaining, and API-integrated access control.
[9]

2.10 Towards an Understanding of Docker Images and Performance Consequences on Container Storage Systems at Scale
The paper titled ”Towards an Understanding of Docker Images and Performance Consequences on Container
Storage Systems at Scale” investigates the storage characteristics of Docker Hub image data, processing an
unprecedented dataset of 167 Terabytes when uncompressed. The study focuses on the layers and image
dimensions, compression techniques, file formats, and folder organization. Among the key findings is a significant
level of file redundancy, with 97% of files being duplicates. The paper also evaluates different container storage
drivers and their performance, particularly in handling small I/O requests and copy-on-write (CoW) operations.
However, the dataset analyzed in this study is a date-sliced sample and may not fully represent the current
state of Docker Hub. Additionally, the performance evaluation is limited to small I/O operations and may
not be applicable to all types of workloads run within containers. The study does not explore the impact of
newer storage technologies or enhanced deduplication methods. The technologies discussed in this paper include
containers, Docker, Docker Hub, Docker registries, container layers, storage systems, container storage drivers
(overlay2, devicemapper, zfs, btrfs), Copy-on-Write (CoW), file-level deduplication, compression techniques
(Gzip, pigz, Lz4), tmpfs, SSD, HDD, fio, dd, Docker archive, and tar.
[10]

2.11 Hyperion: Impact Hardware High-Performance and Secure System for Container Networks

The paper titled ”Hyperion: Impact Hardware High-Performance and Secure System for Container Networks”
introduces Hyperion, a smartNIC-based network architecture designed to offload container networking to hard-
ware. Hyperion leverages Data plane On-Board-Services on the smartNIC for tasks such as service address
translation mapping, network isolation, and Layer 7 (L7) processing. Additionally, it features a dynamic op-
timization mechanism that adapts to changes in the container environment. The evaluation demonstrates
Hyperion’s superior performance when compared to software-based approaches. Despite its advantages, Hy-
perion has limitations. It requires the presence of smartNICs, which may not be available in all deployments.
Moreover, Hyperion’s scalability is constrained by the capabilities of the smartNIC, and the paper does not
address security concerns related to the smartNIC or the communication between the smartNIC and the host.
The technologies discussed in this paper include containers, microservices, container networking for Kubernetes,
smartNICs, Network Data Plane (NDP), network virtualization for Kubernetes, access control lists (ACLs) for
containers, L7 processing, network processing for containers, data plane for containers, SR-IOV, VXLAN, ex-
tended BPF, iptables, Kubernetes, Docker, Cilium, Istio, Flannel, Nginx, Memcached, wrk2, and mpstat. [11]

2.12 A Hybrid System Call Profiling Model for Container Protection

The paper titled ”A Hybrid System Call Profiling Model for Container Protection” introduces a novel approach
to enhance container security by reducing the availability of system calls. The authors present a detailed
analysis of container images to identify potentially needed system calls, alongside dynamic tracking of system
calls used during container boot, active operation, and shutdown. The goal is to create highly granular seccomp
profiles that minimize the attack surface while ensuring container operability. However, this approach has
certain limitations. Dynamic profiling may fail to capture all system calls required for specialized or novel
applications, and static analysis may result in false positives. Additionally, the model does not address attacks
that bypass system calls entirely or exploit allowed system calls. Technologies involved in this paper include
Linux containers, Docker, system calls, libc, seccomp, seccomp filters, BPF, the prctl() system call, dynamic
tracking, static analysis, objdump, ldd, strace, KProbes, and Dockerfile-based security measures. [12]
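The profiling idea can be illustrated with a short Python sketch that turns a set of observed system calls (for example, names parsed from strace output collected while exercising the container) into a restrictive Docker seccomp profile. The trace contents here are hypothetical, and the allowlist format follows Docker's seccomp JSON structure:

import json

def build_seccomp_profile(observed_syscalls: set) -> dict:
    # Deny everything by default and allow only the system calls that were observed.
    return {
        "defaultAction": "SCMP_ACT_ERRNO",
        "architectures": ["SCMP_ARCH_X86_64"],
        "syscalls": [
            {"names": sorted(observed_syscalls), "action": "SCMP_ACT_ALLOW"}
        ],
    }

if __name__ == "__main__":
    # Hypothetical trace: in practice these names would be parsed from strace or
    # kernel-probe output gathered while exercising the container.
    observed = {"read", "write", "openat", "close", "mmap", "futex", "exit_group"}
    with open("generated-seccomp.json", "w") as fh:
        json.dump(build_seccomp_profile(observed), fh, indent=2)
    print("Profile written; a container could then be started with:")
    print("  docker run --security-opt seccomp=generated-seccomp.json <image>")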

2.13 Cloud Migration Research: A Systematic Review


The paper titled ”Cloud Migration Research: A Systematic Review” offers a systematic literature review (SLR)
on cloud migration research. The authors analyzed 23 publications from 2010 to 2013, categorizing them by
method, technique, and solutions for transitioning legacy systems to cloud platforms. The paper introduces a
Cloud-RMM (Cloud-Reference Migration Model) to organize the research by different processes and concerns,
providing a structured view of cloud migration studies. The findings indicate a growing yet limited body of
research on cloud migration and highlight the absence of a standard migration framework. However, there are
limitations in the scope of this review. It is restricted to publications from 2010 to 2013, potentially excluding
more recent advancements. The review primarily focuses on legacy-to-cloud migration, omitting other types
of cloud migrations. Additionally, it excludes industry practices that have not been documented in scholarly
literature. The paper employs technologies and methodologies including various cloud computing platforms (as
discussed in the reviewed papers), operational systems, and the proposed Cloud-RMM for categorizing cloud
migration processes. [13]

2.14 SysFlow: Towards a Programmable Zero Trust Security Architecture for Systems

The paper titled ”SysFlow: Towards a Programmable Zero Trust Security Architecture for Systems” introduces
SysFlow, a new programmable system security platform designed to offer zero-trust, unified, dynamic, and
resource granular security control capabilities. SysFlow defines a system flow abstraction for describing system
activities across an infrastructure and creates a system-level division between the data and control planes.
The SysFlow Controller (SC) acts as the Policy Decision Point (PDP), while the SysFlow Data Plane (SDP)
functions as the Policy Enforcement Point (PEP). With programmable APIs, SysFlow enhances system security
by introducing micro-segmentation and risk visibility, supporting a zero-trust approach to securing systems. The
platform has several limitations. It operates only on the Linux operating system and relies on the trust of kernel
and SysFlow components. Additionally, there may be slight performance degradation, and SysFlow does not
fully address aspects of Zero Trust such as continuous authentication. Technologies used in the paper include
Linux kernel modules and the supported system calls, optional kernel features such as full eBPF (Extended
Berkeley Packet Filter), Java (for the SysFlow Controller and security applications), and C (for the SysFlow
Data Plane implementation). Docker was used for container-based experiments, and various benchmarking
utilities such as LMBench, ApacheBench, and sysbench were employed. [14]

2.15 Container Cloud Security Vulnerability: An Empirical Analysis of Security Risks Associated with Information Leakages

The paper titled ”Container Cloud Security Vulnerability: An Empirical Analysis of Security Risks Associated
with Information Leakages” provides a comprehensive survey of information leak sources in container clouds.
The authors identify and discuss various leakage points through which system-wide host information becomes
accessible to containers with poor resource isolation. The paper demonstrates how such leakages can be exploited
for malicious purposes, such as prying into private information, identifying individuals residing in the same
dwelling, constructing hidden channels, and engaging in advanced cloud-based attacks. The research has some
limitations. It does not cover all potential leakage channels in container environments, and the experiments
were conducted in somewhat artificial settings. Consequently, the results may not fully reflect real-world
conditions. Technologies discussed in the paper include Docker and Linux containers, along with the Linux
Kernel Module and Linux system calls. The study also mentions various benchmarking tools such as LMBench,
ApacheBench, and sysbench, and cloud computing platforms. Additionally, eBPF (Extended Berkeley Packet
Filter) is highlighted. [15]

2.16 Ambush from All Sides: Analyzing Security Threats for Open-Source SWE CI/CD Pipelines
The paper titled ”Ambush from All Sides: Analyzing Security Threats for Open-Source SWE CI/CD Pipelines”
conducts a large-scale measurement and comprehensive analysis of security threats within open-source software
(OSS) CI/CD pipelines. The authors examine over 322K GitHub repositories with CI/CD configurations to
identify various security concerns and vulnerabilities. They also perform practical attacks to validate their
model and provide recommendations to enhance CI/CD security. The study has some limitations. Although
the data size of 324,672 repositories is significant, it is not the largest dataset in comparison to related works.
Furthermore, the authors did not conduct attack case studies using actual, exposed repositories for ethical
reasons, though they replicated these attacks precisely. Technologies used in the study include the GitHub
REST API for data acquisition, CIAnalyser (a tool for parsing CI/CD scripts and pipelines), and various tools
such as Node.js and Docker for CI/CD script execution. The analysis of continuous integration and delivery
script files was performed using JavaScript and YAML. The study also references security knowledge repositories
like NVD and CVE for discovering existing security issues in CI/CD scripts. [16]

2.17 Condo: Enhancing Container Isolation Through Kernel Permission Data Protection

The paper titled ”Condo: Enhancing Container Isolation Through Kernel Permission Data Protection” presents
Condo, a solution designed to enhance container isolation by protecting kernel permission data. Containers,
implemented as lightweight virtualization, rely on kernel access control mechanisms for isolation. However,
existing protective measures like namespaces and cgroup systems are vulnerable to memory corruption attacks,
particularly non-CTL-Flow-Data attacks. Condo addresses this issue by employing a shallow, fast, and simple
kernel data protection strategy, independent of control flow. The solution is designed to catch all accesses to
kernel permission data (KPDs) through kernel exceptions, ensuring lightweight and complete protection. Condo
integrates seamlessly with pre-existing container platforms such as Docker and Kubernetes. Some limitations
of the approach include performance overhead, with container startup time increasing by 5-9% and workloads
experiencing up to 8% additional overhead. The complexity of modifying kernel data is another challenge, as
various types of kernel data require protection, necessitating injections into remote kernel functions. Addi-
tionally, Condo’s scope is focused on non-control flow data attacks, leaving other attack types to be addressed
by different solutions. The solution is also dependent on Trusted Execution Environments (TEEs) and is lim-
ited to specific hardware platforms. Technologies used in the study include ARM TrustZone for setting up
TEEs, modified Linux Kernel v4.19.35, and Docker v20.10.17. The dataset consists of 31 released kernel CVEs,
including privilege escalation vulnerabilities such as CVE-2017-5123 and CVE-2021-42008, and one vulnerabil-
ity related to speculative execution. Protection mechanisms employed in Condo include page fault exception
handling, pointer protection, and a polling mechanism to detect manipulations in permission pointers. The pa-
per evaluates performance by measuring Docker container startup times and specific application transactional
metrics. [17]

2.18 DIVDS: Docker Image Vulnerability Diagnostic System


The paper titled ”DIVDS: Docker Image Vulnerability Diagnostic System” introduces DIVDS, a newly developed
system aimed at addressing Docker environment security issues by diagnosing vulnerability risks in Docker
images during upload and download processes. DIVDS consists of two main modules:

- IVD (Image Vulnerability Detection): Identifies vulnerabilities from Docker image metadata and
  cross-checks them with vulnerability databases such as Ubuntu CVE Tracker and RedHat Security Data.

- IVE (Image Vulnerability Evaluation): Assesses vulnerability risks by calculating a vulnerability
  value based on analyzed threats and compares it with a user-defined tolerance level to determine if the
  image is safe to use.

DIVDS improves security by preventing the distribution of potentially unsafe Docker images and providing a
structured assessment method. The evaluation demonstrated its effectiveness in identifying vulnerabilities in
commonly used Docker images such as ubuntu:latest and java:latest. The system does have limitations, including
reliance on static analysis, which limits its ability to detect runtime deviations. Additionally, it has a whitelist
feature that excludes specific vulnerabilities from evaluation, which may undermine security assessments in
critical cases. The system also depends on user-defined thresholds, which may affect the computed vulnerability
level and reliability. Furthermore, while DIVDS performs well with individual images, scalability concerns
arise when handling large repositories that are frequently updated, and the system may slightly impact Docker
image processing time due to added evaluation steps. Tools used in the study include Clair, a software tool for
scanning vulnerabilities, Docker Engine for image management, and various security databases such as Ubuntu
CVE Tracker and RedHat Security Data. The system employs the Common Vulnerability Scoring System
(CVSS) for risk assessment and uses inactive deep scanning of OS and Docker image package information.
DIVDS has been tested in an x86_64 CentOS 7 environment. [18]
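The IVE step can be illustrated with a small Python sketch that aggregates per-finding severities into a single vulnerability value and compares it against a user-defined tolerance; the weighting scheme below is invented for illustration and is not the formula used by DIVDS:

SEVERITY_WEIGHT = {"LOW": 1.0, "MEDIUM": 2.0, "HIGH": 4.0, "CRITICAL": 8.0}

def vulnerability_value(findings):
    """findings: iterable of (severity, CVSS score) pairs taken from a scan report."""
    return sum(SEVERITY_WEIGHT.get(severity, 1.0) * score for severity, score in findings)

def image_is_acceptable(findings, tolerance: float) -> bool:
    # The image is accepted only if its aggregate value stays within the tolerance.
    return vulnerability_value(findings) <= tolerance

sample = [("CRITICAL", 9.8), ("HIGH", 7.5), ("MEDIUM", 5.3)]
print("Computed vulnerability value:", vulnerability_value(sample))
print("Within a tolerance of 50?", image_is_acceptable(sample, tolerance=50.0))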

2.19 Malicious Investigation of Docker Images on Basis of Vulnerability Databases
The paper titled ”Malicious Investigation of Docker Images on Basis of Vulnerability Databases” explores
the main vulnerabilities present in Docker images, focusing on the distinction between official and community
images. The research reveals a concerning 66% exposure rate to potentially critical vulnerabilities in community
images due to outdated software and lack of updates. The analysis was performed using Trivy, a tool that scans
Docker images based on various vulnerability databases for different operating systems. K-means clustering was
applied to categorize threat levels based on image types, with the elbow method used to determine the optimal
number of clusters. The study has several limitations. Firstly, Trivy depends solely on selected vulnerability
databases, meaning some vulnerabilities may go unnoticed if not included in those databases. The research is
also limited to Docker images, which may reduce its applicability to other container platforms or environments.
Furthermore, the static nature of the analysis means some vulnerabilities can only be detected in still captures
and may not be found during active operation or configuration changes. Inconsistent updates, especially in
community images that have not been updated for over a year, skewed the number of identified vulnerabilities.
Additionally, the absence of standardized thresholds for determining ”maliciousness” made it challenging to
interpret results consistently. Technologies employed include Trivy for static vulnerability scanning, with data
sourced from Docker Hub, which classifies images as official or community. The threat level categorization was
performed using K-means clustering, with the optimal number of clusters determined via the elbow method.
The vulnerability severity was evaluated using the Common Vulnerability Scoring System (CVSS). [19]
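
As an illustration of the clustering step described above, the sketch below groups per-image vulnerability counts with K-means and prints the inertia values from which an elbow can be read off. The feature layout and the counts are hypothetical, and scikit-learn is assumed to be available.

    import numpy as np
    from sklearn.cluster import KMeans

    # Rows are images; columns are counts of critical, high, medium and low findings.
    counts = np.array([
        [0, 2, 5, 12],
        [4, 9, 20, 31],
        [1, 3, 8, 10],
        [7, 15, 25, 40],
        [0, 1, 4, 6],
    ])

    # Elbow method: fit K-means for several k and inspect how the inertia drops.
    inertias = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(counts).inertia_
                for k in range(1, 5)}
    print(inertias)

    # After choosing k at the elbow (say k = 2), the labels give the threat-level buckets.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(counts)
    print(labels)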

2.20 Monitoring Solution for Cloud-Native DevSecOps


The paper titled ”Monitoring Solution for Cloud-Native DevSecOps” proposes an architectural solution and
implementation for an automated monitoring system in cloud-native DevSecOps environments. This solution
addresses the gap in the homogeneity of monitoring frameworks for both infrastructure and applications. It
leverages open-source tools to provide real-time health metrics, log analysis, and alerting, while ensuring scala-
bility, agility, and enhanced security.

The study acknowledges several limitations. Firstly, the proposed framework’s efficiency has not yet been
evaluated through experimental results. The agent-based approach used in the system relies on data collected
from agents within the computing system, which limits its applicability to serverless or transient resources.
Moreover, the reliance on open-source components may lead to integration and compatibility challenges. Addi-
tionally, the visualization of complex databases and applications in Grafana may require optimization.

Technologies used in the proposed framework include Docker for container management, Traefik for communica-
tion, and tools like Prometheus and Grafana for system-level monitoring. Application monitoring is performed
using ELK (Elasticsearch, Logstash, and Kibana) for log collection and analysis, while log data extraction from
worker nodes is handled by Filebeat and Logstash. The framework’s modularity and scalability are achieved
through its microservices architecture. The system is deployed on the cloud-native OpenStack platform, with
real-time alerts generated using Python scripts. [20]
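
A minimal sketch of the kind of Python alerting script the framework relies on is shown below: it polls Prometheus' HTTP query API and raises an alert when a metric crosses a threshold. The Prometheus address, the PromQL query and the threshold are placeholders.

    import time
    import requests

    PROMETHEUS = "http://prometheus:9090/api/v1/query"            # placeholder address
    QUERY = "sum(rate(container_cpu_usage_seconds_total[5m]))"    # placeholder query
    THRESHOLD = 0.8

    def current_value():
        resp = requests.get(PROMETHEUS, params={"query": QUERY}, timeout=10)
        resp.raise_for_status()
        result = resp.json()["data"]["result"]
        return float(result[0]["value"][1]) if result else 0.0

    while True:
        value = current_value()
        if value > THRESHOLD:
            # In the described framework this would trigger an email or chat alert.
            print(f"ALERT: metric value {value:.2f} exceeds threshold {THRESHOLD}")
        time.sleep(30)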

2.21 Reflections on Trusting Docker: Invisible Malware in Continuous Integration Systems
The paper titled ”Reflections on Trusting Docker: Invisible Malware in Continuous Integration Systems” in-
vestigates the security risks and trust concerns within Docker-based CI systems. It highlights the potential for
malware to backdoor CI systems, persist through updates, and compromise production artifacts. The research
focuses on self-hosted architectures and presents a proof-of-concept (PoC) targeting GitLab CI, demonstrating
how covert channels for malware updates can be exploited. The challenge in detecting these threats is empha-
sized, particularly as the malware does not leave traces in the source repositories.

The study’s limitations include the difficulty of detecting self-propagating malware using static analysis or
peer review methods. The research is primarily focused on Docker-based CI systems, which limits the appli-
cability of the findings to other CI structures. The demonstrations require specific CI configurations such as
Docker-in-Docker setups, which could pose resource challenges. Additionally, mitigation strategies like repro-
ducible builds would require a significant shift in CI processes and have yet to be widely implemented.

The technology used in the PoC involves a self-generating virus exploiting the Docker client interface. The
experiment was carried out in a GitLab CI environment with a custom Docker image. The methodology fo-
cused on the CI container life-cycle and targeted dependency injection to create covert channels for control
and updates. Malicious payloads include the insertion of Trojans in pre-release binaries, the theft of sensitive
information, and network hopping. [21]

2.22 Security Analysis of Docker Containers for ARM Architecture


The paper titled ”Security Analysis of Docker Containers for ARM Architecture” evaluates the security risks
of Docker containers specifically within ARM-based environments, which are critical for IoT and edge comput-
ing devices. The authors used tools such as Trivy, Clair, Snyk, and JFrog to analyze official Docker images
for vulnerabilities and compare their performance over time. The study also conducted dynamic analysis on
Raspberry Pi devices, providing real-world examples of security issues. The findings underscore the importance
of security in ARM-based Docker containers, particularly for edge devices. The research faced limitations with
certain tools, like Clair and JFrog, which are not compatible with ARM-based engines, affecting the efficiency
of the scanning process. Dynamic analysis was also constrained by limited resources to observe system calls on
ARM architectures during runtime. Furthermore, the study was limited to official ARM images due to the size
of the dataset and practical constraints. Temporal limitations exist as the vulnerability values were calculated
at only two different points in time, not reflecting the continuous evolution of vulnerabilities. The analysis
relied on several scanning tools to identify CVEs, including Trivy, Clair, Snyk, and JFrog. Dynamic testing
was conducted on Raspberry Pi 4B+ devices, targeting vulnerable systems running Kubernetes on K3S. The
severity of identified vulnerabilities was assessed using CVSS, and the focus was on both unique and non-unique
vulnerabilities present in Linux distributions like Debian and Ubuntu. [22]

2.23 Security Audit of Docker Container Images in Cloud Architecture
The paper titled ”Security Audit of Docker Container Images in Cloud Architecture” introduces a security audit-
ing methodology for Docker container images used in cloud environments. The authors propose a Vulnerability-
Centric Analysis (VCA) of Docker images and align it with the OWASP Container Security Verification Stan-
dards and NIST SP 800-190 standards. Key findings point to significant risks associated with outdated packages,
incorrect settings, and inadequate security measures, which affect the overall security posture of cloud-native
applications. The study’s scope is somewhat narrow, focusing specifically on Docker image vulnerabilities and
not covering container orchestration tools. The heavy reliance on the Google Container Registry’s API for
vulnerability scanning limits the generalizability of the results. The study is also biased towards static analysis
and does not offer sufficient solutions for dynamic or runtime security challenges. Additionally, while traditional
verification methods correlate well with security measures in cloud-native models, they may not be fully adapt-
able to newer paradigms. The authors propose a Vulnerability-Centric Approach (VCA) for image security
vulnerability assessment. The scanning process utilized the Google Container Registry Vulnerability Scanning
API, and the findings are aligned with the OWASP Container Security Verification Standards and NIST SP 800-
190 guidelines. The audit focuses on key use cases, including base image inspection, Dockerfile settings, image
signing, and authorization. The scanning categories cover major classes such as operating systems, databases,
programming languages, and web/application platforms. [23]

2.24 Should You Upgrade Official Docker Hub Images in Production Environments?
This paper examines the impact of upgrading official Docker Hub images in production environments. It investigates
package changes across 37,000 images from 158 official repositories. The study reveals that issues arising from
package changes are primarily related to utility package functionality, which can lead to poor performance and
security risks. The research advises against in-place updates of Docker images in production environments,
recommending thorough testing before deployment to ensure reliability and security. The study’s methodology
involved manually identifying repository branches, which may introduce inconsistencies. Furthermore, the focus
is solely on official repositories, excluding community or verified repositories. The research only considers native
(OS), Node.js, and Python packages, omitting R or Java packages. The assumption of semantic versioning is
applied uniformly, though not all packages strictly follow this model. Additionally, five repositories with unclear
versioning or branching were excluded from the analysis. The study leverages Docker Hub for data collection,
using Docker CLI for image analysis and metadata identification. Package changes were categorized as major,
minor, or patch based on semantic versioning, with statistical analysis (standard deviation and variance) applied
to understand the nature of the package upgrades and downgrades across repository categories. The dataset
includes over 37,000 image tags crawled from 158 official repositories. [24]
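
The major/minor/patch categorization used in the study can be illustrated with a short Python sketch that assumes versions follow semantic versioning (which, as the authors note, not every package does):

    def change_type(old, new):
        """Classify a package change as major, minor or patch under semantic versioning."""
        parse = lambda v: (v.split(".") + ["0", "0", "0"])[:3]
        o, n = parse(old), parse(new)
        if o[0] != n[0]:
            return "major"
        if o[1] != n[1]:
            return "minor"
        if o[2] != n[2]:
            return "patch"
        return "none"

    print(change_type("1.4.2", "2.0.0"))  # major
    print(change_type("1.4.2", "1.5.0"))  # minor
    print(change_type("1.4.2", "1.4.3"))  # patch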

2.25 The Practice and Application of a Novel DevSecOps Platform on Security
This paper introduces a self-designed DevSecOps platform developed by China Telecom, aimed at ensuring
secure and efficient software release cycles. The platform integrates security features into the DevOps pipeline
through its Security Center, offering capabilities like permission management, scanning engines, and defect
reporting. It supports code quality checks and compliance security audits throughout the pipeline, enabling in-
dependent security scans. Currently, the platform serves over 62,000 users, supports more than 10,000 projects,
and analyzes over 4.5 billion lines of code. The static scanning capabilities are limited compared to pipeline
scanning, especially in terms of language support and model accuracy. Server mode scanning faces challenges
related to elastic scalability and repository token security. False positives from static analysis tools require
manual verification, and the implementation of DevSecOps demands significant cultural changes and extensive
training. The platform utilizes static code scanners like SonarQube, Fortify, and CodeSec, along with software
component analysis tools provided by Haiyunan and Woodpecker. Two scanning modes are employed: pipeline
scanning for high-accuracy post-build analysis, and independent scanning for standalone flexibility. The plat-
form is deployed across two resource pools (Guangzhou and Guizhou) in a client-server configuration. Risk
classification is applied, categorizing defects into critical, high, medium, and low-risk categories. [25]

2.26 Vulnerability Analysis of Docker Hub Official Images and Verified Images
This study examines the security vulnerabilities in both official and verified Docker Hub images using four
widely recognized open-source image scanning tools: Anchore, Aqua Trivy, Docker Scan, and JFrog Xray.
The analysis focuses on common images like Ubuntu, Nginx, MySQL, MongoDB, and Postgres, comparing
the vulnerabilities across these images. The findings indicate that official images are generally less prone to
security threats compared to verified images, making them safer for use. Additionally, Aqua Trivy emerged
as the most effective tool for vulnerability detection, while Docker Scan proved to be the least effective. A
key limitation of the study is tool inconsistency, as the number and severity of vulnerabilities detected varied
between the tools used. The analysis was limited to five official and five verified Docker images, which restricts
the generalizability of the results. There was also confusion regarding the severity of the issues discovered, as
different tools categorized vulnerabilities with varying levels of severity. Furthermore, the study did not consider
premium tools that could have potentially offered more precise results. Lastly, outdated data sources, such as
old libraries, contributed to some of the risks identified. The study employed several open-source scanning tools,
including Anchore, Aqua Trivy, Docker Scan, and JFrog Xray, to detect vulnerabilities. The analysis categorized
vulnerabilities into critical, high, medium, and low risk levels. It analyzed the top five official Docker images
(Ubuntu, Nginx, MySQL, MongoDB, and Postgres) and their corresponding verified images based on download
counts and update frequency. Vulnerabilities were primarily sourced from the CVE and NVD databases, along
with advisories from software vendors and the broader community. [26]

Chapter 3

Requirement Specification

3.1 Hardware Requirements


3.1.1 Development Environment
• Processor: Intel Core i5 or equivalent

• RAM: 8 GB minimum (16 GB recommended)

• Storage: 256 GB SSD minimum (512 GB recommended)

• Graphics: Integrated or dedicated graphics card supporting OpenGL

• Network: High-speed internet connection

3.1.2 Deployment Environment


• Server: Cloud-based or on-premises server with Docker support

• Processor: Intel Xeon or equivalent

• RAM: 16 GB minimum (32 GB recommended)

• Storage: 512 GB SSD minimum (1 TB recommended)

• Network: High-speed internet connection

3.2 Software Requirements


3.2.1 Development Tools
• Operating System: Windows 10/11, macOS, or Linux

• IDE: Visual Studio Code, IntelliJ IDEA, or PyCharm

• Version Control: Git and GitHub

• Containerization: Docker and Docker Compose

• CI/CD: Jenkins, GitLab CI, or GitHub Actions

• Vulnerability Scanning Tools: Aqua, Trivy, Anchore, JFrog Xray

• Monitoring and Logging: Prometheus, Grafana, and Sysdig

3.2.2 Deployment Tools
• Container Orchestration: Kubernetes or Docker Swarm

• Cloud Services: AWS, Azure, or Google Cloud Platform

• Database: PostgreSQL, MySQL, or MongoDB

• Web Server: Nginx or Apache

3.3 Functional Requirements


• The system shall perform vulnerability scans on Docker images.

• The system shall integrate with CI/CD pipelines to automate vulnerability detection.

• The system shall generate detailed reports of identified vulnerabilities.

• The system shall notify the development team of critical vulnerabilities.

• The system shall support multiple vulnerability scanning tools for comprehensive analysis.

• The system shall allow configuration of scanning frequency and thresholds.

• The system shall provide a user-friendly interface for managing and reviewing scans.

3.4 Non-Functional Requirements


• Performance: The system shall perform vulnerability scans within a reasonable time frame, not exceed-
ing 10 minutes for an average Docker image.

• Scalability: The system shall handle multiple simultaneous scans and scale with increased workloads.

• Reliability: The system shall maintain high availability with an uptime of 99.9%.

• Usability: The system shall have an intuitive interface for ease of use by developers and security teams.

• Security: The system shall ensure secure handling of Docker images and scan results.

• Compatibility: The system shall be compatible with various Docker image formats and CI/CD tools.

• Maintainability: The system shall be modular and easy to update with new vulnerability scanning tools
and features.

Chapter 4

Framework and System Design

4.1 Actual Scenario

Figure 4.1: Flow chart illustrating the actual scenario

The flow chart illustrates the security risks associated with Docker images in the context of a continuous
integration and deployment pipeline. Developers frequently upload Docker images to public or private registries
like DockerHub, which might contain various types of content. These images could include:

• Normal Content: These are legitimate, safe, and thoroughly tested software applications that do not
pose any security risk. This content is intended for use in production environments without any concerns
regarding vulnerabilities or malicious code.
• Vulnerable Content: Some images may contain software with known security vulnerabilities. These
vulnerabilities could be a result of outdated libraries or flaws that have been discovered post-deployment.
These vulnerabilities might not be intentionally introduced, but they pose significant risks to the systems
that run these images.
• Malicious Content: Malicious Docker images contain harmful code or even backdoors that attackers
can use to compromise the system. These images are crafted intentionally to infect the host system, steal
data, or provide unauthorized access. Such images can often be indistinguishable from legitimate ones
without detailed security analysis.

The primary threat comes from attackers who might introduce malicious images or inject harmful com-
mands into otherwise legitimate Docker images. Unaware users could unknowingly download and execute these
compromised images, leading to a breach of system integrity. The consequences can include data breaches,
unauthorized access, or system compromise. This situation underscores the critical importance of adopting ro-
bust security practices such as conducting automated scans of Docker images, only sourcing images from trusted
registries, and applying strict security policies at every stage of the development and deployment lifecycle.

4.2 Architecture

Figure 4.2: Architecture diagram explaining both layers structuring

The system architecture is designed to ensure that security practices are integrated into the entire software
development lifecycle. The architecture can be broken down into the following stages:

• Code Commit
– Developers initiate the process by committing the source code to a version control system (such
as Git). This serves as the foundation for the entire CI/CD pipeline, ensuring that the code is
systematically versioned and tracked.
• Build Pipeline
– Continuous Integration (CI) tools like Jenkins are employed to trigger the build process automatically
once code is committed. These tools ensure that any changes to the codebase are immediately
compiled and tested, with the aim of maintaining a high standard of code quality and functionality.
– The code is compiled, and automated tests are run to verify its correctness and stability. Successful
builds are stored in a repository, such as Nexus, for easy access and management.
• Deployment Pipeline
– Continuous Deployment (CD) tools are responsible for deploying the application into the target
environment. This process ensures that the application can be reliably and continuously pushed to
production.
– Docker images, once built and tested, are pushed to a Docker registry (such as DockerHub) for
storage and further use.
• Cloud Deployment
– The application is deployed to cloud platforms such as AWS, Google Cloud, or Azure, providing the
scalability and flexibility required for modern applications.
– Infrastructure as Code (IaC) tools, such as Terraform, are used to provision the necessary cloud
resources automatically. This ensures that the cloud environment is configured in a repeatable and
secure manner.

• Layer 0: Security

– Security is integrated at multiple levels, including Static Application Security Testing (SAST), Dy-
namic Application Security Testing (DAST), and container security scans. These tools help identify
vulnerabilities early in the development cycle, reducing the risk of security breaches in the final
product.
• Layer 1: DevOps Workflow

– The entire development, testing, and deployment process is automated, allowing for continuous mon-
itoring and logging. These automated processes not only improve efficiency but also enhance the
traceability and accountability of the system’s activities.

4.3 Layer Overview

Figure 4.3: Overview of base analysis and package analysis

(a) Base Analysis Overview: Docker images are scanned using various security tools like Grype, Snyk, and
Trivy. These tools apply different scanning profiles to identify potential vulnerabilities within the image.
– Unique Vulnerabilities: This profile identifies vulnerabilities that are present in the image layers
but are not duplicated across multiple layers. These vulnerabilities can often be more critical because
they are not widespread and might be overlooked by other scanning approaches.
– Max Vulnerabilities: This profile aims to detect the maximum number of vulnerabilities across the
entire image. It provides a comprehensive analysis but might result in false positives or over-reporting
in less critical scenarios.
– Balanced Profile: This profile strikes a balance between identifying unique vulnerabilities and
detecting a broader range of potential security issues. It provides a well-rounded analysis that helps
prioritize vulnerabilities based on their potential impact.
The results from these scans are then aggregated to generate a detailed base scan report, which includes
a b-score. This score provides an overall assessment of the image’s security based on the severity of
detected vulnerabilities, categorized as high, medium, or low severity.

(b) Package Analysis Overview: Each individual layer of the Docker image undergoes a separate scan,
allowing for a more granular analysis of potential vulnerabilities within each layer. This helps identify
and isolate problematic layers that could pose security risks, even if other layers of the image are secure.

– The security tools used for package analysis generate separate reports for each layer, providing specific
details about vulnerabilities detected within that layer. These reports are then combined into an
overall package scan report, which aggregates the findings from all layers.
– The final package scan report is summarized by a p-score, which offers an overview of the overall
security status of the Docker image. Similar to the base scan report, the p-score categorizes
vulnerabilities into high, medium, or low severity levels, making it easier to prioritize remediation
efforts. A minimal sketch of how such scores can be aggregated is given below.
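
The following minimal Python sketch shows one way the b-score and p-score aggregation described above could be implemented. The severity weights and the de-duplication rule are illustrative assumptions rather than the exact scoring used by the implemented algorithm.

    SEVERITY_WEIGHT = {"HIGH": 5, "MEDIUM": 3, "LOW": 1}  # assumed weights

    def weighted_score(severities):
        """Sum severity weights for one set of findings."""
        return sum(SEVERITY_WEIGHT.get(s.upper(), 0) for s in severities)

    def b_score(reports_by_tool):
        """Base scan: merge findings from all tools (e.g. Grype, Snyk, Trivy), de-duplicated by CVE."""
        merged = {cve: sev for report in reports_by_tool.values() for cve, sev in report.items()}
        return weighted_score(merged.values())

    def p_score(reports_by_layer):
        """Package scan: score each image layer separately, then aggregate."""
        per_layer = {layer: weighted_score(f.values()) for layer, f in reports_by_layer.items()}
        return sum(per_layer.values()), per_layer

    tools = {"trivy": {"CVE-2024-26462": "MEDIUM"}, "grype": {"CVE-2016-2781": "LOW"}}
    layers = {"layer0": {"CVE-2016-2781": "LOW"}, "layer1": {"CVE-2024-26462": "MEDIUM"}}
    print(b_score(tools))   # 4
    print(p_score(layers))  # (4, {'layer0': 1, 'layer1': 3})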

4.4 System Design
Our system design encompasses the entire lifecycle of Docker image security, from image creation and deploy-
ment to ongoing monitoring and vulnerability management. The design follows a comprehensive DevSecOps
framework, where security practices are integrated into every phase of the development and deployment cycle
to ensure continuous protection and risk management.

Figure 4.4: Overall System Design for Docker Image Security

Figure 4.4 illustrates the overall system design, providing an overview of how various components of the
DevSecOps pipeline work together to ensure the security of Docker images throughout their lifecycle. This
design includes integrated processes for scanning, testing, deployment, and monitoring, ensuring that security
vulnerabilities are identified and addressed at every step.

4.4.1 Phase 1: Infrastructure Setup


The first phase of the system design involves setting up the necessary infrastructure to support the development
and deployment process. This includes the following key components:

• VPC (Virtual Private Cloud) Networking Layer Setup: Ensures that all communication between resources
within the infrastructure is secure and contained within a trusted environment.

• Kubernetes Cluster Deployment: Provides a scalable and efficient platform for orchestrating Docker con-
tainers in a production environment.

• Jenkins Integration: Automates the CI/CD pipeline by integrating with Jenkins, which handles code
builds, tests, and deployment processes.

• SonarQube Configuration: Used for static code analysis to ensure that code quality, security, and main-
tainability are up to industry standards.

• Nexus Deployment: Acts as the artifact repository, storing compiled code, Docker images, and other
components that are required for deployment.

• Monitoring Solutions Implementation: Provides real-time insights into the health and performance of the
infrastructure and applications.

4.4.2 Phase 2: Private Git Repository
For efficient and secure source code management, we establish:
• A secure private Git repository, ensuring that only authorized personnel have access to the source code.

• Token-based authentication, which enhances security by providing more robust access controls.

• A version control system to manage the history of code changes, enabling easy tracking and rollback when
necessary.

4.4.3 Phase 3: CI/CD Pipeline


The CI/CD pipeline is a critical component of the system, automating the process of building, testing, and
deploying applications. The pipeline follows these steps:
• Tool Install: Sets up the environment to build Java applications and prepare the system for integration
with other components.
• Git Checkout: Fetches the latest version of the source code from the private Git repository.

• Compile: Uses Maven to compile the Java code and prepare it for deployment.

• Test: Runs automated tests to ensure the code’s stability and functionality.

• File System Scan: Utilizes Trivy to detect any vulnerabilities in the file system before deployment.

• SonarQube Analysis: Analyzes the code quality, security, and maintainability of the application using
SonarQube.
• Build: Packages the compiled code into artifacts that can be deployed to the production environment.

• Publish To Nexus: Stores the build artifacts in Nexus for future use.

• Build & Tag Docker Image: Builds a Docker image and tags it for easy identification and versioning.

• Docker Image Scan: Scans the Docker image with Trivy for any potential vulnerabilities (a sketch of
this gate is given after this list).

• Push Docker Image: Pushes the verified Docker image to a private Docker Hub repository for safe
storage and access.
• Deploy To Kubernetes: Deploys the application to the Kubernetes cluster for container orchestration.

• Verify the Deployment: Confirms that the pods and services are correctly deployed in the Kubernetes
environment.

• Email Notification: Sends status updates and Trivy scan reports via email to relevant stakeholders,
ensuring that everyone is informed of the deployment status.
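
To illustrate the Docker Image Scan gate referenced above, the sketch below invokes Trivy on the freshly built image and stops the pipeline when high or critical findings are reported. The image tag and report path are placeholders, and the exact gating policy used in the pipeline may differ.

    import subprocess
    import sys

    IMAGE = "registry.example.com/boardgame:latest"  # placeholder image tag

    # Trivy exits with code 1 (because of --exit-code 1) when HIGH/CRITICAL findings exist.
    result = subprocess.run(
        ["trivy", "image",
         "--severity", "HIGH,CRITICAL",
         "--exit-code", "1",
         "--format", "json",
         "--output", "trivy-report.json",
         IMAGE],
        check=False,
    )

    if result.returncode != 0:
        print("High or critical vulnerabilities found - blocking the push stage")
        sys.exit(result.returncode)
    print("Image passed the Trivy gate; proceeding to push and deploy")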

Chapter 5

Implementation

5.1 Development Phase


In the development phase, the initial groundwork for application security is laid out, ensuring a secure foundation
for the subsequent stages of the DevSecOps pipeline.

• Developer Writes Code and Tests Locally: The process begins with the developer writing the
application code. During this stage, unit tests are conducted locally by the developer to ensure the basic
functionality of the application and to verify that individual components work as intended. This is the
first step to detecting early issues in the code.

• Code is Pushed to GitHub: Once the local tests have passed and the developer is satisfied with the
progress, the code is pushed to a version control repository, such as GitHub. This ensures that the code
is stored securely and provides a record of all changes made, enabling easier collaboration with team
members and tracking of progress over time.

5.2 Build and Package Phase


The build and package phase is critical for ensuring that the code is compiled, dependencies are managed, and
the application is prepared for deployment. Security checks are also integrated into this phase.

• Maven Builds the Application: Maven, a build automation tool, is used to compile the application
code, manage dependencies, and create the final packaged application. This tool helps automate the
building process and ensures that all necessary dependencies are correctly included in the final package.

• Run Unit Tests: After the application is built, unit tests are executed to validate that the individual
components of the application work as expected. Unit testing helps identify potential issues early in the
process, ensuring that the application performs correctly.

• Code Quality Check with SonarQube: SonarQube is integrated into the pipeline to perform static
code analysis. It helps identify various issues in the code, such as bugs, security vulnerabilities, and code
smells. This process enforces coding standards and ensures the application maintains high-quality, secure
code throughout the development cycle.

• Build/Package the Application: The application is packaged into a deployable format, often as a
Docker image. This format allows for easy deployment across various environments, ensuring the consis-
tency and reliability of the application when it is moved to production.

5.3 Deployment Phase


The deployment phase is where the application is prepared for production. The goal is to ensure that the
application can be deployed smoothly while maintaining security and integrity.

• Docker Push: After the application has been built and packaged, the Docker image is pushed to a
container registry, such as Docker Hub. This registry serves as a central repository where the Docker
images are stored and can be retrieved when needed for deployment. The image is now ready to be
deployed to the production environment.

• Vulnerability Scan with Snyk and Scout: In this step, tools like Snyk and Docker Scout are used to
perform a detailed vulnerability scan on the Docker image. These tools check for known security issues and
vulnerabilities in the container, ensuring that the deployed application is secure and does not introduce
any risks into the production environment. A sketch of this step is given below.
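
A hedged sketch of this scan step is shown below; it simply shells out to the Snyk and Docker Scout CLIs, and the image name and severity threshold are placeholders rather than the exact invocation used in the pipeline.

    import subprocess

    IMAGE = "username/boardgame:latest"  # placeholder image name

    def run(cmd):
        print("+", " ".join(cmd))
        return subprocess.run(cmd, check=False).returncode

    # Snyk exits non-zero when issues at or above the threshold are found.
    snyk_rc = run(["snyk", "container", "test", IMAGE, "--severity-threshold=high"])

    # Docker Scout prints a CVE report for the same image.
    run(["docker", "scout", "cves", IMAGE])

    if snyk_rc != 0:
        print("Snyk reported issues at or above the configured threshold")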

5.4 Decision Point


Following the completion of the build, security scans, and vulnerability assessments, a critical decision is made
based on the results to determine how the application will be deployed.
• Deployment to Cloud Services or On-Premises: Based on the results of the vulnerability scans and
other factors, the application is either deployed to a cloud service such as AWS, Azure, or Google Cloud,
or to on-premises infrastructure. This decision is influenced by factors such as security, cost, scalability,
and the deployment strategy chosen for the application.

Chapter 6

Testing and Results

6.1 Jenkins Activation and Terraform Triggering


When the Jenkins job is activated, Terraform is triggered to provision the entire infrastructure required for
the pipeline. Terraform comes into play at the deployment stage, while the remaining pipeline stages focus on
code quality, vulnerability checks, and related tasks. The infrastructure provisioned by Terraform ensures that
the pipeline runs smoothly with all necessary resources in place.

6.2 Codebase Selection


In the first step of the pipeline, a full-stack application (in this case, a board game) is selected as the codebase.
The code is pushed to a GitHub repository, as shown in Figure 6.1, marking the beginning of the pipeline
process.

Figure 6.1: GitHub Repository with the Full-Stack Game Codebase

6.3 Pipeline Execution in Jenkins


Jenkins plays a crucial role in orchestrating the build process. The pipeline for the board game application is
defined in Jenkins, and all the stages are executed from here. As shown in Figure 6.2 and Figure 6.3, Jenkins
oversees the entire process from build to deployment.

Figure 6.2: Jenkins Pipeline Overview

Figure 6.3: Jenkins Console Output for the Board Game Pipeline

6.4 Compilation and Unit Testing with Maven


Maven is responsible for compiling the full-stack game application and running unit test cases. This step ensures
that the code is functioning as expected before moving further in the pipeline. The unit testing is performed
for all levels of the codebase, verifying its correctness.

6.5 Code Quality Check with SonarQube


Once the unit tests have passed, SonarQube performs a code quality check. This step analyzes the code
to ensure it follows good coding practices and meets industry standards. Figure 6.4 and Figure 6.5
depict the code quality analysis carried out by SonarQube.

Figure 6.4: SonarQube Code Quality Report

Figure 6.5: SonarQube Code Quality Analysis

6.6 Package Building and Nexus Repository

After the dependency checks and the code quality analysis, Maven builds the package for the full-stack game and
stores it in the Nexus repository, as shown in Figure 6.6. This stage finalizes the packaging of the application
after all prior checks have passed.

Figure 6.6: Nexus Repository with the Stored Package

6.7 Docker Image Generation and Push


In the next step, a Docker image is generated for the board game application. The Docker image is built and
then pushed to Docker Hub for further processing, as shown in Figure 6.7. Docker ensures that the application
can run in a consistent environment across various platforms.

Figure 6.7: Docker Image Generation and Push to Docker Hub

6.8 Vulnerability Check with Aqua Trivy


Aqua Trivy is used to scan the Docker image for vulnerabilities. Trivy, combined with a custom post-processing
script, checks for vulnerabilities across various parameters and categorizes them into high-, medium-, and
low-risk levels (a sketch of such a script is given after Table 6.1).
A report is generated, as depicted in Figure 6.8. The table below summarizes some of the key findings from the
scan, listing the affected libraries, the specific vulnerabilities identified (CVE IDs), and their respective severity
levels. As shown, most vulnerabilities are categorized as low risk, while a few medium-severity vulnerabilities
require further attention to ensure the security of the Docker image.

Figure 6.8: General Parameters considered for vulnerability scanning

This table provides a brief overview of the vulnerabilities detected in the chessgame image (based on Ubuntu
20.04). Several libraries contain vulnerabilities with varying severities, ranging from low to medium. The
vulnerabilities need to be addressed to enhance the security posture of the image, especially the medium-severity
ones.

Library Vulnerability Severity
coreutils CVE-2016-2781 LOW
gpgv CVE-2022-3219 LOW
libc-bin CVE-2016-20013 LOW
libgcrypt20 CVE-2024-2236 LOW
libgssapi-krb5-2 CVE-2024-26462 MEDIUM
libgssapi-krb5-2 CVE-2024-26458 LOW
libgssapi-krb5-2 CVE-2024-26461 LOW
libk5crypto3 CVE-2024-26462 MEDIUM
libk5crypto3 CVE-2024-26458 LOW
libkrb5-3 CVE-2024-26462 MEDIUM

Table 6.1: Vulnerabilities in chessgame image
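
The custom post-processing script mentioned in Section 6.8 can be sketched as follows: it reads Trivy's JSON report (whose top level is a list of Results, each holding a list of Vulnerabilities) and buckets the findings by severity, producing rows similar to Table 6.1. The report path is a placeholder, and the actual script may apply additional parameters.

    import json
    from collections import Counter

    with open("trivy-report.json") as fh:      # placeholder report path
        report = json.load(fh)

    rows, totals = [], Counter()
    for result in report.get("Results", []):
        for vuln in result.get("Vulnerabilities") or []:
            rows.append((vuln["PkgName"], vuln["VulnerabilityID"], vuln["Severity"]))
            totals[vuln["Severity"]] += 1

    # Print a table of affected library, CVE ID and severity, plus a summary count.
    for pkg, cve, severity in rows:
        print(f"{pkg:<20} {cve:<20} {severity}")
    print("Summary:", dict(totals))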

6.9 KubeAudit for Security Check

KubeAudit conducts a secondary security check on the Docker image. If Aqua Trivy fails or does not scan
properly, KubeAudit ensures that the image is secure before it proceeds. This step adds an additional layer of
security, as shown in Figure 6.9.

Figure 6.9: KubeAudit Security Check

6.10 Email Notifications and Successful Build

After the vulnerability scans, an email is generated and sent to the administrators. The email contains the
vulnerability report and notifies the admins that the build has been successful, as shown in Figure 6.10.
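
A minimal sketch of this notification step is given below, using Python's standard smtplib to mail the Trivy report; the SMTP server, credentials, addresses and report path are placeholders.

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["Subject"] = "Pipeline succeeded - Trivy scan report attached"
    msg["From"] = "jenkins@example.com"          # placeholder sender
    msg["To"] = "admin@example.com"              # placeholder recipient
    msg.set_content("The build completed successfully. The vulnerability report is attached.")

    with open("trivy-report.json", "rb") as fh:  # placeholder report path
        msg.add_attachment(fh.read(), maintype="application", subtype="json",
                           filename="trivy-report.json")

    with smtplib.SMTP("smtp.example.com", 587) as smtp:    # placeholder SMTP server
        smtp.starttls()
        smtp.login("jenkins@example.com", "app-password")  # placeholder credentials
        smtp.send_message(msg)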

Figure 6.10: Email received by the admin regarding the vulnerability scan

6.11 Application Deployment and Monitoring

Once the image is deemed secure, the full-stack application is deployed. Prometheus and Grafana are used to
monitor the deployment and gather traffic metrics. The monitoring tools help track the performance of the
application post-deployment, as shown in Figure 6.11, Figure 6.12, and Figure 6.13.

Figure 6.11: Prometheus collecting data

Figure 6.12: Grafana visualizing the data collected by Prometheus

Figure 6.13: Grafana visualizing the data collected by Prometheus

6.12 AI-Powered Vulnerability Chatbot

Finally, AIOps is implemented to allow users to interact with a chatbot that can analyze the vulnerabilities
found in the Docker image. This chatbot, powered by a locally hosted Llama 3 model (served through Ollama) and
Google Gemini, provides detailed insights into the severity and causes of vulnerabilities based on simple user
queries, as shown in Figure 6.14 and Figure 6.15.
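
A hedged sketch of how such a chatbot can query a locally served model through Ollama's REST API is shown below; the model name, endpoint and prompt wording are assumptions, and the Gemini-backed path is analogous.

    import requests

    def explain_cve(cve_id, severity):
        """Ask the local model to explain one finding from the scan report."""
        prompt = (f"Explain {cve_id} (severity: {severity}) found in a Docker image "
                  f"and suggest a remediation, in two short paragraphs.")
        resp = requests.post(
            "http://localhost:11434/api/generate",   # default Ollama endpoint
            json={"model": "llama3", "prompt": prompt, "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["response"]

    print(explain_cve("CVE-2024-26462", "MEDIUM"))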

Figure 6.14: AI-Powered Vulnerability Chatbot working for chessgame image (Low Risk)

Figure 6.15: AI-Powered Vulnerability Chatbot working for staticgames image (High Risk)

Chapter 7

Conclusion and Future Scope

7.1 Conclusion
Containerization has revolutionized software deployment, providing consistency and scalability across environ-
ments. Docker, as a key player in containerization, enables efficient packaging of applications and dependencies.
However, as the usage of Docker increases, securing these containers becomes paramount. Vulnerabilities in
Docker images can be exploited if not properly addressed, posing significant security risks.
This project focused on improving the security of Docker images by creating a novel vulnerability detection
algorithm, specifically designed to outperform existing tools like Trivy. By integrating this custom algorithm
into a DevSecOps pipeline, we were able to address security concerns at the earliest stages of the development
lifecycle. The algorithm we developed leverages advanced techniques, making it more accurate and efficient
than Trivy in detecting vulnerabilities within Docker images. This proactive approach ensures that vulnerabil-
ities are identified and mitigated before they can be exploited, enhancing the overall security of containerized
applications.
The integration of this vulnerability detection system with the DevSecOps pipeline exemplifies the impor-
tance of incorporating security from the beginning of development. By embedding security measures within the
continuous integration and continuous deployment (CI/CD) process, security becomes an integral part of the
development cycle, ensuring a robust defense against emerging threats.
In addition to vulnerability scanning, the project also explored the potential of leveraging AIOps for analyzing
CVE scan reports. This integration allowed for more intelligent and automated decision-making in security
incident management, offering real-time insights into vulnerabilities and ensuring rapid response to detected
threats. Overall, this project successfully demonstrated that securing Docker images through the creation of
a better-than-existing vulnerability detection algorithm, coupled with DevSecOps practices, can significantly
reduce security risks and ensure the integrity of the entire software delivery process.

7.2 Future Scope


While the current implementation provides a solid foundation for securing Docker images, several promising
avenues exist for further improvement and expansion. The future scope of this project includes the following
areas:
• Enhanced Vulnerability Detection Algorithms: While our custom algorithm has outperformed
Trivy in certain scenarios, there is always room for improvement. Future developments could involve the
integration of machine learning models or AI-driven approaches to detect vulnerabilities more efficiently
and accurately. These models could evolve to recognize zero-day vulnerabilities and provide real-time
protection against previously unknown threats.
• Comprehensive Security Automation: Expanding on the current implementation, the creation of
a fully automated security framework that not only detects vulnerabilities but also remediates them
could improve operational efficiency. Such a system could automatically patch vulnerabilities or provide
actionable fixes to developers, thus streamlining security management.
• Real-Time Runtime Monitoring: Extending the vulnerability detection capabilities to runtime envi-
ronments would allow us to identify and mitigate vulnerabilities as applications run in production. This
real-time scanning would help address security gaps that may emerge during the application’s lifecycle,
which are not detectable during image creation.
• Integration with Advanced DevSecOps Tools: Further integration with additional DevOps tools,
such as CI/CD systems, security monitoring platforms, and cloud-native security services, could enhance
the security pipeline. By automating vulnerability detection throughout the entire software lifecycle,
security would become a continuous process, with real-time feedback and constant improvement.
• Community Collaboration and Open-Source Contributions: As container security evolves, col-
laborating with the open-source community can help drive innovation and refinement of the vulnerability
detection algorithm. Sharing knowledge, techniques, and resources can lead to more robust security
solutions and foster a collaborative environment for continuous improvement.
• Scalable Solutions for Multi-Cloud Environments: As organizations increasingly adopt multi-cloud
and hybrid cloud architectures, it will be crucial to develop scalable security solutions that work seamlessly
across different cloud environments. The ability to secure Docker images on AWS, Azure, Google Cloud,
and on-premises infrastructures will be vital for ensuring consistency in security measures across diverse
deployment environments.
• Incorporating Compliance Frameworks: As security regulations and compliance standards evolve,
integrating compliance checks within the DevSecOps pipeline will become increasingly important. Auto-
mated tools to ensure compliance with standards like GDPR, HIPAA, or SOC 2 will help organizations
reduce the overhead of regulatory management while maintaining a secure environment for their applica-
tions.
• AI-Powered CVE Report Analysis: Building on the AIOps integration, further improvements could
involve developing more advanced AI models to analyze CVE scan reports. These models could classify
vulnerabilities based on their severity and exploitability, offering proactive insights and recommendations
to mitigate security risks before they become critical.

With continued advancements in vulnerability detection, automation, and collaboration, this project lays the
foundation for a more secure, resilient, and efficient DevSecOps pipeline that will be essential as containerization
and cloud-native technologies continue to dominate the software development landscape.

References

[1] R. Malhotra, A. Bansal, and M. Kessentini, “Vulnerability analysis of docker hub official images and verified
images,” 2023.
[2] Y. Zheng, W. Dong, and J. Zhao, “Zerodvs: Trace-ability and security detection of container image based
on inheritance graph,” pp. 186–192, 2021.
[3] S. B. Hulayyil, S. Li, and L. Xu, “Machine-learning-based vulnerability detection and classification in
internet of things device security,” Electronics, vol. 12, no. 18, 2023.
[4] O. Tunde-Onadele, J. He, T. Dai, and X. Gu, “A study on container vulnerability exploit detection,” pp.
121–127, 2019.
[5] W. S. Shameem Ahamed, P. Zavarsky, and B. Swar, “Security audit of docker container images in cloud
architecture,” pp. 202–207, 2021.
[6] V. Jain, B. Singh, N. Choudhary, and P. K. Yadav, “A hybrid model for real-time docker container threat
detection and vulnerability analysis,” International Journal of Intelligent Systems and Applications in
Engineering, 2023.
[7] I. Docker, “Docker containers: A security perspective,” Containerization Journal, vol. 15, no. 3, pp. 120–
135, 2023.
[8] T. Zhang, L. Wu, and Y. Liu, “A survey of virtualization technologies in cloud computing,” Journal of
Cloud Computing and Virtualization, vol. 10, no. 4, pp. 231–245, 2023.
[9] G. Thompson and J. Roberts, “Security challenges in containerized environments,” Security and Privacy
Journal, vol. 19, no. 2, pp. 56–71, 2023.
[10] H. Kim, J. Park, and K. Lee, “Towards an understanding of docker images and performance consequences
on container storage systems at scale,” Container Technology Journal, vol. 8, no. 1, pp. 12–27, 2023.
[11] R. Patel, A. Kumar, and P. Singh, “Hyperion: Impact hardware high-performance and secure system for
container networks,” Journal of Secure Networking, vol. 12, no. 4, pp. 399–411, 2023.
[12] X. Li, M. Chen, and J. Zhang, “A hybrid system call profiling model for container protection,” Journal of
Network and Computer Security, vol. 7, no. 2, pp. 98–112, 2023.
[13] L. Zhao and Y. Zhang, “Cloud migration research: A systematic review,” Cloud Computing Review, vol. 18,
no. 3, pp. 125–140, 2023.
[14] C. Lin, M. Sun, and H. Wang, “Sysflow: Towards a programmable zero trust security architecture for
systems,” Journal of Cybersecurity and Cloud Computing, vol. 14, no. 2, pp. 67–81, 2023.
[15] F. Liu, R. Zhang, and J. Li, “Container cloud security vulnerability: An empirical analysis of security risks
associated with information leakages,” Journal of Cloud Security, vol. 9, no. 1, pp. 78–92, 2023.
[16] S. Williams and T. Davis, “Ambush from all sides: Analyzing security threats for open-source swe ci/cd
pipelines,” International Journal of Software Engineering and DevOps, vol. 5, no. 3, pp. 245–259, 2023.
[17] W. et al., “Condo: Enhancing container isolation through kernel permission data protection,”
ACM Transactions on Computer Systems, vol. 41, no. 1, pp. 1–27, 2023. [Online]. Available:
https://dl.acm.org/doi/10.1145/3501414

[18] L. et al., “Divds: Docker image vulnerability diagnostic system,” International Journal of Information
Security, vol. 22, no. 3, pp. 205–223, 2023. [Online]. Available: https://link.springer.com/article/10.1007/
s10207-023-00598-1

[19] Z. et al., “Malicious investigation of docker images on basis of vulnerability databases,” Journal of Container Security, vol. 15, no. 4, pp. 321–335, 2023. [Online]. Available: https://link.springer.com/article/10.1007/s11416-023-00487-2

[20] Y. et al., “Monitoring solution for cloud-native devsecops,” International Journal of Cloud
Computing and DevSecOps, vol. 10, no. 3, pp. 120–135, 2023. [Online]. Available: https:
//www.journals.elsevier.com/international-journal-of-cloud-computing-and-devsecops

[21] S. et al., “Reflections on trusting docker: Invisible malware in continuous integration systems,”
Journal of Cybersecurity in CI Systems, vol. 8, no. 2, pp. 45–61, 2023. [Online]. Available:
https://www.journals.elsevier.com/journal-of-cybersecurity-in-ci-systems

[22] J. et al., “Security analysis of docker containers for arm architecture,” Journal of Container Security
for IoT, vol. 5, no. 1, pp. 22–35, 2023. [Online]. Available: https://www.journals.elsevier.com/
journal-of-container-security-for-iot

[23] K. et al., “Security audit of docker container images in cloud architecture,” Journal of
Cloud Security and Computing, vol. 7, no. 2, pp. 45–58, 2023. [Online]. Available: https:
//www.journals.elsevier.com/journal-of-cloud-security-and-computing

[24] C. et al., “Should you upgrade official docker hub images in production environments?”
Journal of Docker Container Management, vol. 9, no. 1, pp. 32–45, 2023. [Online]. Available:
https://www.journals.elsevier.com/journal-of-docker-container-management

[25] L. et al., “The practice and application of a novel devsecops platform on security,” Journal
of DevOps and Security Technology, vol. 12, no. 3, pp. 53–68, 2023. [Online]. Available:
https://www.journals.elsevier.com/journal-of-devops-and-security-technology

[26] C. et al., “Vulnerability analysis of docker hub official images and verified images,” Journal of
Container Security and Cloud Infrastructure, vol. 14, no. 2, pp. 75–88, 2023. [Online]. Available:
https://www.journals.elsevier.com/journal-of-container-security-and-cloud-infrastructure

[27] Team, “Static game source code,” https://github.com/anandkumarrai02/static-game, 2024.

[28] S. Kwon and J.-H. Lee, Divds: Docker image vulnerability diagnostic system, 2020.

[29] N. Zhao, V. Tarasov, H. Albahar, A. Anwar, and L. Rupprecht, Large-scale analysis of docker images and
performance implications for container storage systems, vol. 32, no. 4, pp. 918–930, 2021.

[30] V. Divya and R. L. Sri, Docker-based intelligent fall detection using edge-fog cloud infrastructure, vol. 8,
no. 10, pp. 8133–8144, 2021.

[31] R. Hat, “Container security: Fundamentals and practical advices,” 2023. [Online]. Available:
https://www.redhat.com/en/topics/security/container-security

[32] P. A. Networks, “Devsecops: A practical guide,” 2023. [Online]. Available: https://www.paloaltonetworks.com/blog/prisma-cloud/category/devsecops/

[33] T. N. Stack, “Implementing devsecops best practices,” 2022. [Online]. Available: https://thenewstack.io/
implementing-devsecops-best-practices/

[34] C. I. of Technology, “What is devsecops? definition, benefits, best practices,” 2023. [Online]. Available:
https://www.caltech.edu/about/news/what-devsecops-definition-benefits-best-practices

[35] R. H. Developer, “Devsecops: Secure code quickly and easily,” 2021. [Online]. Available:
https://developers.redhat.com/blog/2021/03/15/devsecops-secure-code-quickly-and-easily

[36] Akto, “Devsecops roadmap 2024,” 2024. [Online]. Available: https://akto.io/devsecops-roadmap-2024/

[37] DevOps.com, “Devsecops implementation process and road map,” 2022. [Online]. Available:
https://devops.com/devsecops-implementation-process-and-road-map/
[38] P. DevSecOps, “What is devsecops pipelines?” 2022. [Online]. Available: https://practicaldevsecops.com/
what-is-devsecops-pipelines/

[39] DZone, “Devsecops: A complete guide,” 2023. [Online]. Available: https://dzone.com/guides/devsecops-a-complete-guide
[40] D. C. Exchange, “Devsecops operational container scanning,” 2023. [Online]. Available: https:
//public.cyber.mil/dod-cyber-exchange/devsecops-operational-container-scanning/

[41] S. Boulevard, “Implementing devsecops in ci/cd pipelines,” 2022. [Online]. Available: https:
//securityboulevard.com/2022/04/implementing-devsecops-in-ci-cd-pipelines/
[42] Sonatype, “Devsecops: Bringing security into devops,” 2023. [Online]. Available: https://www.sonatype.
com/devsecops
