DevOps Engineer
Name: MITHUN KUMAR
Email: [email protected]
Phone: 4703439576
LinkedIn: https://www.linkedin.com/in/mithunkranthi/
PROFESSIONAL SUMMARY:
11+ years of experience in IT infrastructure, cloud computing, site reliability engineering, DevOps, build and
release management, virtualization, networking, automated deployment, and Red Hat Enterprise Linux
cloud implementation, configuration, and troubleshooting.
Good understanding of the principles and best practices of Software Configuration Management (SCM) in Agile,
Scrum, and Waterfall methodologies.
Expertise in using scripting languages such as JavaScript, Shell, Go, Bash, Perl, Ruby, Groovy, and Python to
improve builds and deployments and to automate repetitive tasks.
Proficient in implementing CI/CD using Jenkins, GitHub Actions, and Harness across AWS, Azure, and GCP.
Skilled in Kubernetes, Terraform, Helm, and monitoring tools such as Prometheus and Grafana, with a strong
background in supporting enterprise-scale systems with a focus on performance, automation, and reliability.
Expertise in setting up SCM standards and processes for development groups and designing branching and labeling strategies.
Experience working with log monitoring tools like ELK Stack, Nagios, Splunk, Prometheus, Grafana.
Experienced with source control tools such as Git, Subversion (SVN), and CVS, including migrating
codebases from SVN to Git.
Automated Google Cloud Platform infrastructure using GCP Cloud Deployment Manager and
secured GCP infrastructure with private subnets, security groups, and network ACLs (VPC).
Familiarity with DevSecOps principles, implementing security measures throughout the development lifecycle.
Extensive experience using build automation tools such as Ant, Maven, and Gradle.
Experienced in orchestrating Docker containers with ECS and Kubernetes as container management platforms, and
in using Terraform to define infrastructure as code.
Experienced in working with monitoring and observability tools including Dynatrace, Splunk, Prometheus,
Grafana, ELK, CloudWatch, and Datadog for proactive performance monitoring and incident management.
Skilled in leveraging Dynatrace AI-driven analytics for root cause analysis, service health monitoring, and
improving system reliability in large-scale cloud and on-prem environments.
Strong exposure to Ansible automation for replacing EKS components such as Kubeflow.
Expert in various Azure services like Compute (VM), Caching, Azure SQL, NoSQL, Storage, and Network
services, Azure Active Directory (AD), API Management, Scheduling, Azure Autoscaling.
Expertise in creating DevOps strategy in a mixed environment of Linux servers and Amazon Web Services.
Extensively worked on Hudson/Jenkins for continuous integration and end-to-end automation of all builds and
deployments.
Worked on AWS EKS clusters and Kubeflow environments for MLOps deployments using YAML manifests, Argo CD, and
GitHub Packages.
Expertise in Application Deployments & Environment configuration using Ansible, Chef, Puppet.
Worked on AWS services (EC2, S3, RDS, EBS, ECS, EKS, ELB, IAM, VPC, DynamoDB, Route 53, SQS, and SNS).
Managed clusters of servers in Azure Cloud resource groups and monitored them via remotely run
scripts, Ambari, Azure Data Factory, and Blob storage.
Managed Amazon Web Services infrastructure with automation and configuration management tools such
as Terraform, Ansible, Chef, Puppet, and SaltStack.
Worked on several Docker components like Docker Engine, Hub, Machine, Compose and Docker Registry.
Experienced in Configuration management tools like Puppet, Chef, Ansible and expertise in developing
Recipes/Manifests and Python scripts to automate the environment.
Developed Python scripts to automate AWS services including ELB, CloudFront distributions, EC2,
Route 53, Auto Scaling, ECS, security groups, Aurora, and S3 buckets.
Extensively worked on Jenkins, Semaphore, Bamboo, and Spinnaker for continuous integration/deployment and
end-to-end automation of all builds.
Installed, configured, and managed monitoring tools such as CloudWatch, Splunk, and Nagios.
Established provisioning on Azure PaaS/IaaS, VSTS CI, and Node.js/C# REST API application development.
Used TFS for project management in agile software development.
Experienced in web/application servers like JBOSS, WebSphere, WebLogic, Tomcat, Nginx.
TECHNICAL SKILLS:
Amazon Web Services: IAM, EC2, ELB, EBS, Route 53, S3, AMI, CloudWatch, CloudFront, RDS, Lambda, VPC, Glacier, SQS, DynamoDB
Configuration Management: Chef, Puppet, Ansible
Languages/Scripting: Bash shell scripting, Python, Ruby, Perl, C, C++, Java, J2EE
Databases: PostgreSQL, Oracle, DB2, MySQL
Cloud Environments: Amazon Web Services, Microsoft Azure, Google Cloud Platform (GCP), OpenStack
Build Tools: Maven, Ant, Nexus
Version Control: Subversion (SVN), Git, GitHub
Containerization: Docker, ECS, Kubernetes
Web Servers: Apache, Tomcat, JBoss, WebSphere, WebLogic
Continuous Integration: Jenkins, Hudson
Bug Reporting: Bugzilla, JIRA, Lean Testing
CI/CD Tools: ArgoCD, Jenkins, GitHub Actions, GitLab CI, Tekton, Octopus Deploy
Monitoring: Nagios, Splunk
SDLC: Agile, Scrum, Waterfall
Virtualization: Oracle VirtualBox, VMware Workstation
Operating Systems: Red Hat, Ubuntu, CentOS, Linux, macOS, Windows
PROFESSIONAL EXPERIENCE
Client: MUFG, NYC Aug 2024-Till Date
Role: DevOps Engineer/Site Reliability Engineer
Responsibilities:
Managed and maintained cloud services using AWS CloudFormation, helping developers and businesses
provision resources in an orderly and predictable fashion.
Managed Amazon EC2 cloud instances using Amazon Machine Images (Linux/Ubuntu) and configured launched instances
for specific applications.
Led production war rooms and used Dynatrace service flow maps to isolate root causes during high-severity
(P1/P2) incidents, reducing MTTR.
Defined and tracked SLOs/SLAs using Dynatrace dashboards for critical banking applications.
Automated anomaly detection and alerting in Dynatrace, improving proactive incident prevention.
Partnered with developers to implement performance tuning based on Dynatrace transaction flow insights,
optimizing cloud costs and reliability.
Responsible for day-to-day Build and deployments in Dev, QA, pre-production, and production environments.
Implemented AWS high availability using AWS Elastic Load Balancing (ELB), which balanced traffic across
instances in multiple Availability Zones.
Implemented and managed Harness pipelines to automate deployments of critical microservices across AWS
environments with RBAC-controlled approvals and rollback policies.
Migrated existing Jenkins jobs to Harness, enabling simplified visual pipelines and improved observability of
deployment health.
Customized Harness templates and environment strategies to support multi-cluster Kubernetes deployments using
Helm and ArgoCD.
Integrated Dynatrace monitoring into AWS and Kubernetes environments to track microservices latency, errors, and
resource utilization.
Created Dynatrace dashboards for real-time visibility of production applications, improving incident response
during war rooms.
Automated performance alerts and reporting through Dynatrace API, helping stakeholders monitor SLAs and system
health.
Created Harness pipelines for internal tools using GitOps model with approval stages, Slack notifications, and policy
gates.
Integrated Harness with GitHub and Terraform to support Infrastructure as Code and governed release workflows in
compliance with banking security standards.
Supported on High Availability, Big Data solutions and Storage systems and planning for backup strategies.
Managed user accounts (IAM) and the RDS, Route 53, VPC, DynamoDB, SES, SQS, and SNS services in the AWS
cloud.
Managing virtual data center in the Amazon Web Services cloud to support Enterprise Data Warehouse hosting
including Virtual Private Cloud (VPC), Public and Private Subnets, Security Groups, Route Tables, Elastic Load
Balancer.
Supported highly durable and available data using the S3 data store, versioning, and lifecycle policies, and created AMIs of
mission-critical production servers for backup.
Supported multi-tier application provisioning in the OpenStack cloud, integrating it with Jenkins.
Managed and configured AWS cloud infrastructure as code using Terraform, with continuous deployment through
Jenkins.
Implemented and managed infrastructure to support Databricks clusters on cloud platforms like AWS, Azure, or GCP
using Infrastructure as Code (IaC) tools such as Terraform or CloudFormation.
Managed a small team of engineers to build and manage a large deployment of Red Hat Linux instances
with Ansible and Terraform automation; provisioned virtual servers using Vagrant and Test Kitchen in Oracle VM
VirtualBox, and provisioned servers in Amazon EC2.
Wrote AWS Lambda functions in Python that invoke scripts to perform various
transformations and analytics on large data sets in EMR clusters.
Implemented Continuous Integration using Jenkins and GIT from scratch.
Built and deployed scalable, secure applications using CI/CD pipelines and version control tools
such as GitLab and GitHub.
Created Ansible playbooks to automatically install packages from a repository, to change the configuration of remotely
configured machines and to deploy new builds.
Migrated legacy applications to the GCP platform and managed the GCP services.
Experienced with Ansible Tower; integrated Ansible Tower with Jenkins to deploy code to different servers.
Created Ansible roles in YAML (tasks, variables, files, handlers, templates) and wrote playbooks for those
roles.
Managed a PaaS for deployments using Docker/ECS and Ansible, which considerably reduced deployment risks.
Implemented procedures to unify, streamline, and automate application development and deployment with
Linux container technology using Docker.
Implemented containerization and immutable infrastructure; Docker was core to this work, along with
Marathon and Kubernetes.
Built and maintained Docker container clusters managed by Kubernetes on GCP (Google Cloud Platform),
using Linux, Bash, Git, and Docker.
Experience with threat modeling, especially for web applications and web APIs.
Built high-performance web services using languages such as Python and Ruby.
Developed build scripts using Ant/Maven and Gradle as build tools to create build artifacts such as WAR and
EAR files.
Resolved errors in the pom.xml file to obtain correct builds with the Maven build tool.
Worked with different bug tools like JIRA and IBM Clear Quest.
Designed and worked with the team to implement the ELK (Elasticsearch, Logstash, and Kibana) stack on AWS.
Environment: Docker, ECS, Ansible, Terraform, AWS, EC2, S3, VPC, ELB, CloudWatch, DynamoDB, SNS, SQS, API
Gateway, Auto Scaling, EBS, RDS, Jenkins, Git, Linux, GCP, LAMP, Nagios, Maven, Apache Tomcat, Shell, Perl, and Python.
Client: UHG, Chicago IL Sep 2023-Jul 2024
Role: DevOps Engineer/Site Reliability Engineer
Responsibilities:
Worked in a highly collaborative operations team to streamline the process of implementing security
in the client's Azure cloud environment and introduced best practices for remediation.
Gathered requirements from clients about existing applications in order to apply security measures.
Evaluated the latest features introduced by Microsoft Azure (Azure DevOps, OMS, NSG rules, etc.) and applied
them to existing business applications.
Created, validated, and reviewed solutions and effort estimates for converting existing workloads from classic to ARM-based
Azure cloud environments.
Integrated testing automation into CI/CD pipelines to ensure quality and compliance with Agile principles.
Used Jenkins pipelines to drive all microservices builds out to the Docker registry and then deployed them to
Kubernetes.
Deployed Dynatrace OneAgent across Azure workloads and Kubernetes clusters to establish full-stack observability.
Integrated Dynatrace with Azure DevOps pipelines to automatically validate application performance during each
release.
Configured custom dashboards and alerts in Dynatrace for dev and QA teams to improve release confidence.
Used Dynatrace logs and metrics in post-incident reviews, driving long-term fixes and reducing recurring outages.
Designed CI/CD pipelines in Harness for Azure-based applications, reducing deployment lead time by 30% and
improving release reliability.
Configured Harness integrations with Azure Repos and Key Vault for secure secrets management and dynamic
environment provisioning.
Experience in package and patch management on Linux and Solaris servers, firmware upgrades, and debugging.
Developed an automation system using PowerShell scripts and JSON templates to remediate Azure services.
Implemented backup methodologies with PowerShell scripts for Azure services such as Azure SQL Database,
Key Vault, Storage blobs, and App Services.
Created Azure services using ARM templates (JSON) and ensured no changes in the present infrastructure while doing
incremental deployment.
Created CI/CD pipelines for .NET and Python apps in Azure DevOps by integrating source code repositories such as
GitHub, VSTS, and artifacts. Created deployment stages such as testing, pre-production, and production environments in
the Kubernetes cluster.
Implemented Agile DevOps practices by automating CI/CD pipelines using Jenkins, GitLab CI/CD, or CircleCI.
Monitored sprint progress using Agile tools like Jira, Rally, or Azure DevOps.
Managed EKS node groups and node upgrades, decommissioning nodes from active participation by draining them
before upgrade.
Used Dynatrace to detect performance bottlenecks in CI/CD deployments, reducing downtime during critical
releases.
Collaborated with development and QA teams by sharing Dynatrace service flow insights for faster troubleshooting.
Managing the Infrastructure on Google cloud Platform using Various GCP services. Configuring and deploying
instances on GCP environments and Data centers, also familiar with Compute.
Configured XL Deploy and XL Release for all applications from scratch; once a build package was available, it was
promoted with simple enable options for deployment to the target servers.
Worked on Azure IoT product development, including designing a new LoRaWAN product based on
embedded Linux hardware and software.
Collaborated with DevOps team and responsible for specialization in Jenkins automation.
Acted as build and release engineer for UI and CI/CD pipelines across IaaS, PaaS, SaaS, and DevSecOps; deployed services via
VSTS (Azure DevOps) pipelines. Created and maintained pipelines to manage the IaC for all applications.
Wrote PowerShell scripts to automatically create the parameter files for all services in Azure Resource Manager.
Worked with Terraform templates and modules to automate Azure IaaS virtual machines and
deployed virtual machine scale sets in the production environment.
Managed Azure infrastructure: Azure Web Roles, Worker Roles, Azure SQL, Azure Storage, Azure AD licenses, and virtual
machine backup and recovery.
Wrote templates for Azure infrastructure as code using Terraform to build staging and production environments.
Worked on container orchestration with Kubernetes: container storage, automation, troubleshooting,
and multi-regional deployment models and patterns for large-scale applications.
Deployed Windows Kubernetes (K8s) clusters with Azure Container Service (ACS) from the Azure CLI.
Built end-to-end CI/CD pipelines in Jenkins to retrieve code, compile applications, perform tests, and push build
artifacts to the Nexus artifact repository.
Utilized Kubernetes and Docker for the runtime environment of the CI/CD system to build, test, and deploy with Octopus Deploy.
Backed up MySQL databases with a script that runs mysqldump and packages the output in a zip file.
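A minimal sketch of such a backup script in Python (the database name, backup user, and archive naming scheme are illustrative assumptions; credentials are assumed to live in ~/.my.cnf rather than on the command line):

```python
import subprocess
import zipfile
from datetime import datetime, timezone

def dump_command(db, user="backup"):
    # Assemble the mysqldump invocation; --single-transaction gives a
    # consistent snapshot for InnoDB without locking tables.
    return ["mysqldump", "--single-transaction", "-u", user, db]

def archive_name(db, when=None):
    # Timestamped archive name, e.g. appdb-20240102T030405.zip
    when = when or datetime.now(timezone.utc)
    return f"{db}-{when:%Y%m%dT%H%M%S}.zip"

def run_backup(db, user="backup"):
    # Run mysqldump, then package the .sql dump into a zip archive.
    dump = subprocess.run(dump_command(db, user),
                          capture_output=True, check=True)
    out = archive_name(db)
    with zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr(f"{db}.sql", dump.stdout)
    return out
```

In practice a wrapper like this would run from cron, with the resulting archives rotated or shipped to object storage.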
Integrated Git into Jenkins to automate the code checkout process and used Jenkins for automating builds and
deployments.
Wrote shell scripts to automate routine tasks such as removing core files, backing up important files, and transferring
files among servers.
Experience in writing scripts in Ruby and Python for automation.
Environment: Azure, Office 365, Terraform, Jenkins, Ansible, Azure ARM, Azure AD, Azure Site, Kubernetes, Python,
Ruby, XML, GCP, shell scripting, IaaS, PaaS, SaaS, DevSecOps, PowerShell, UI and CI/CD pipelines, JFrog Artifactory,
Git, Jira, GitHub, Docker, Windows Server, TFS, VS Code.
Client: Wells Fargo, Hyderabad India Jan 2019-Jun 2022
Role: DevOps Engineer/Cloud Engineer
Responsibilities:
Worked on a variety of Linux platforms, including Red Hat Linux: installation, configuration, and maintenance
of applications across Development, Test, and Production environments.
Configured the AWS stack for AMI management, Elastic Load Balancing, Auto Scaling, CloudWatch, EC2, EBS, IAM,
Route 53, S3, RDS, and CloudFormation.
Attached the ELK stack to EC2 and ELB to store logs and metrics using AWS Lambda functions.
Used AWS Lambda as a serverless backend with Python 3.6 and the boto3 libraries, and implemented Lambda concurrency
with DynamoDB Streams to trigger multiple Lambdas in parallel.
Designed and worked on a CI/CD pipeline supporting workflows for application deployments using Jenkins,
Artifactory, Chef, Terraform and AWS CloudFormation.
Supported and configured TFS/VSTS, Build Servers and Release Management servers.
Experienced with both A/B and blue/green deployments.
Orchestrated multiple ETL jobs using AWS Step Functions and Lambda, and used AWS Glue to load and prepare
data for customer analytics.
Worked with JIRA for creating Projects, assigning permissions to users and groups for the projects & Created Mail
handlers and notification Schemes for JIRA.
Integrated ArgoCD with external CI/CD pipelines (e.g., Jenkins, GitHub Actions) to create a seamless end-to-end
deployment workflow.
Deployed and managed Dynatrace agents across Linux servers to capture key performance metrics alongside Splunk
and Datadog.
Leveraged Dynatrace dashboards during incident management to identify root causes and reduce MTTR.
Tuned SQL queries and application configurations based on Dynatrace recommendations, improving overall system
performance.
Built dashboards in Datadog to monitor the infrastructure; configured and installed Splunk agents on Linux
servers and containers to forward logs to Splunk.
Skilled in implementing CI/CD pipelines, automating the build, testing, and deployment procedures for PCF
applications, and seamlessly integrating PCF deployments with source control systems.
Worked on several Docker components like Docker Engine, Hub, Machine and Docker Registry.
Incorporated Amazon OpsWorks, a configuration management service that uses Chef to automate how servers
are configured and deployed across Amazon EC2 instances.
Configured the ELK stack for Jenkins logs, syslogs, and network packets using Beats shippers such as Filebeat,
Metricbeat, and Packetbeat.
Worked with cloud platforms (AWS, GCP) to design and deploy highly available systems using Golang and
containerized environments (Docker, Kubernetes).
Configured Elasticsearch, Logstash and Kibana (ELK) for log analytics, full-text search, application monitoring in
integration with AWS Lambda and CloudWatch.
Created and managed VPCs, subnets, and route tables to connect different zones.
Worked on Ansible playbooks, roles, include statements, vars, modules, and check mode (dry run) to automate the
installation of Docker Engine and a Docker Swarm cluster.
Integrated SonarQube with Jenkins for continuous inspection of code quality and analysis with SonarQube scanner
for Maven.
Extensively used JIRA as a ticketing tool for creating sprints, bug tracking, issue tracking, project management
functions, and releases.
Environment: Ansible, Puppet, AWS, Jenkins, Git, GitHub, GCP, Atlassian Stash, Maven, Nagios, Nexus, CloudWatch,
Bash shell, Perl scripting, Python, PostgreSQL, MySQL, MariaDB, RHEL, Ubuntu, SUSE, Solaris, middleware application
servers, VMware Virtual Center, vSphere 5/6.
Client: CAST Software India Pvt Ltd, India April 2015 – Dec 2018
Role: DevOps Engineer/Cloud Engineer
Responsibilities:
As a DevOps Engineer, was responsible for automation and orchestration of cloud service offerings on AWS.
Managing Amazon Web Services (AWS) infrastructure with automation and configuration management using Ansible.
Designing cloud-hosted solutions, specific AWS product suite experience.
Deploy and monitor scalable infrastructure on Amazon web services (AWS) & configuration management using Chef.
Built application and database servers using AWS EC2, created AMIs, and used RDS for Oracle DB.
Created S3 buckets, uploaded files through the console and CLI, managed bucket policies, and utilized S3
and Glacier for storage and backup on AWS.
Excelled at creating AMIs (Amazon Machine Images) that utilize ELB (Elastic Load Balancing) and Auto Scaling.
Created several EC2 instances based on requirements, along with EBS storage, partitioning, and mounting.
Created and configured RDS instances per DBA requirements.
Handled integration of Maven/Nexus, Jenkins, GIT, Confluence and Jira.
Built and maintained Docker container clusters managed by Kubernetes on GCP (Google Cloud Platform), using Linux,
Bash, Git, and Docker. Utilized Kubernetes and Docker for the runtime environment of the CI/CD system to build, test,
and deploy.
Implemented Dynatrace monitoring for AWS-hosted applications to track end-to-end user transactions.
Used Dynatrace for trend analysis and anomaly detection, enabling proactive fixes before issues impacted
production.
Implemented CI/CD pipelines for Redshift schema and data deployment.
Collaborated on designing Terraform configurations and deploying them via Cloud Deployment Manager to spin up
resources on Google Cloud Platform (GCP) services such as Compute Engine, Cloud SQL, Cloud Load Balancing, Storage,
networking services, disks, VPC, GKE, Pub/Sub, and Cloud NAT.
Researched the open-source database DevOps tools Liquibase and Flyway; installed Liquibase, connected it to an
Oracle database, and created tasks simulating the DevOps procedure.
Worked on the IAM service, creating users and groups and adding roles and policies per requirements.
Created monitors, alarms, and notifications for EC2 hosts using CloudWatch.
Worked with performance monitoring tools such as AWS CloudWatch and Nagios.
Installation, configuration, and upgrades of Apache, JBoss, WebSphere, MQSeries, and Oracle and IBM databases on RHEL
and OEL Linux systems, both manually and through Puppet.
Integrated Jenkins with Jira, Atlassian tools, and GitHub to streamline software development processes.
Carried Deployments and builds on various environments using continuous integration tool Jenkins. Designed the
project workflows/pipelines using Jenkins as CI tool.
Implemented Git repositories, defined branching strategies in Git, and established best
practices.
Written Shell scripts to automate the deployment process.
Expertise in using build tools such as Maven to build deployable artifacts such as WAR and EAR files from source code.
Application Deployments & Environment configuration using Chef.
Environment: AWS – EC2, VPC, S3, RDS, Jenkins, GCP, Maven, Nagios, Git.
Client: Dimension Data India Pvt. Ltd, India June 2013 – March 2015
Role: Systems Engineer
Responsibilities:
Created and managed deployments to ECS Cluster, AWS Lambda and provisioned additional resources like AWS SQS
Queues for data messaging.
Migrated consumer data from one production server to another over the network
with the help of Bash and Perl scripting.
Used Jenkins pipelines to drive all microservices builds out to the Docker registry and then deployed them to Kubernetes.
Set up scripts to create new snapshots and delete old snapshots in S3 using the S3 CLI tools.
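The retention logic behind such a cleanup script can be sketched in Python; the dict shape mirrors what `aws ec2 describe-snapshots` returns, and the seven-day window and function name are illustrative assumptions (the original scripts drove the CLI tools directly):

```python
from datetime import datetime, timedelta, timezone

def expired_snapshots(snapshots, retention_days=7, now=None):
    # Return the IDs of snapshots older than the retention window.
    # Each snapshot is a dict with 'SnapshotId' and 'StartTime',
    # matching the per-snapshot fields of describe-snapshots output.
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retention_days)
    return [s["SnapshotId"] for s in snapshots if s["StartTime"] < cutoff]
```

A cron-driven wrapper would then pass each returned ID to the delete call (CLI or boto3) after creating the day's new snapshot.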
Configured the CI/CD pipeline using GitHub, Jenkins, Docker, and Kubernetes; also configured the CI/CD
pipeline in the AWS environment using AWS CodePipeline, AWS CodeCommit, and AWS CodeBuild.
Installed and configured Puppet master and agents in Unix/Linux environments and wrote Puppet modules to install
and configure the middleware platform.
Expertise in configuring Chef Server and Workstation and bootstrapping nodes; wrote Chef cookbooks and recipes
using Ruby.
Assisted in setting up Dynatrace monitoring for middleware applications running on JBoss, Tomcat, and WebLogic.
Generated Dynatrace performance reports to support capacity planning and system tuning.
Designed, implemented, and managed a CI/CD pipeline for a new product resulting in a 2x decrease in the time
to release a new product version.
Integrated Jenkins with Jira, Atlassian tools, and GitHub to streamline software development processes.
Built and maintained Docker container clusters managed by Kubernetes on GCP (Google Cloud Platform), using Linux,
Bash, Git, GitLab, and Docker. Utilized Kubernetes and Docker for the runtime environment of the CI/CD system to
build, test, and deploy.
Installed and configured JBoss EAP servers in standalone and domain modes; performed performance tuning and
troubleshooting of WebLogic, JBoss, Tomcat, and Apache server instances.
Created tagging standards for proper identification and ownership of EC2 instances and other AWS resources.
Deployed applications on AWS by using Elastic Beanstalk.
Responsible for designing roles and groups for users and resources using AWS Identity Access Management (IAM).
Utilized CloudFormation and Ansible to create DevOps processes for a consistent and reliable deployment
methodology; created and cloned virtual machines in the VMware environment using the Virtual Infrastructure Client.
Responsible for health checks of the Linux servers.
Environment: AWS (EC2, VPC, ELB, S3, RDS, EBS, CloudFormation, CloudWatch, AWS CLI), GCP, Jenkins, WebLogic,
Perl, Shell, Terraform, Docker, and Kubernetes.
EDUCATION:
Master of Science in Cloud Computing Aug 2022 – May 2024
Campbellsville University, USA
Bachelor of Technology in Electronics and Communication Engineering June 2009 – May 2013
Annamalai University, India