RAJENDRA BEHERA

Data Engineer – Fresher | +91 7653098034 | [email protected]


LinkedIn: https://www.linkedin.com/in/rajendrbehera/

TECHNICAL SKILLS
• Programming Languages: Python, C, C++, Java
• Backend Frameworks: Node.js, Express.js
• Web Development: HTML, CSS, JavaScript, Bootstrap, Tailwind CSS, React.js, Next.js, Streamlit
• Databases & Connectors: MySQL, MongoDB, Cassandra, Kafka Connect
• Data Warehouses: Hive, BigQuery, Snowflake
• Message Queue: Confluent Kafka
• Distributed Computation Frameworks: Apache Hadoop, Apache Spark
• Workflow Management: Airflow
• Cloud Platforms: GCP, Azure
• Version Control: Git, GitHub
• CI/CD: GitHub Actions
• Containerization & Orchestration: Docker, Kubernetes
• Backend as a Service (BaaS): Supabase
• Infrastructure as Code (IaC): Terraform

EXPERIENCE
Azure Data Engineer – Intern | Grow Data Skills | JUN 2024 – OCT 2024
• Assisted in building data engineering workflows using SQL, Python, and PySpark to process large datasets efficiently.
• Collaborated on designing and implementing data pipelines on GCP and Azure using Airflow and Databricks.
• Supported the integration of Kafka for real-time data streaming, ensuring smooth data flow across distributed systems.
• Worked with NoSQL database management systems, optimizing data retrieval and storage for high-performance applications.

PROJECTS
Fintech SQL Data Migration to Azure
• Prepared an Azure SQL database with tables containing historical data.
• Set up a data pipeline in Azure Synapse to dynamically move data from the SQL database to the Bronze layer (ADLS).
• Set up a PySpark notebook to read Bronze layer data and write the transformed data into the Silver layer as Delta tables.
• Set up a PySpark notebook to read Silver layer data and write the transformed data into the Gold layer as Delta tables (see the sketch below).
Tech Stack: Python, Azure SQL Database, SQL, Azure Synapse, ADLS, PySpark, Delta Tables
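
A minimal sketch of the Bronze-to-Silver step described above, assuming a Synapse PySpark notebook with Delta Lake available; the storage account, container names, and column names are hypothetical placeholders, not taken from the actual project.

    # Bronze -> Silver transform in a Synapse PySpark notebook (illustrative sketch).
    # Paths, container names, and columns below are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("fintech-bronze-to-silver").getOrCreate()

    bronze_path = "abfss://bronze@<storage_account>.dfs.core.windows.net/fintech/transactions/"
    silver_path = "abfss://silver@<storage_account>.dfs.core.windows.net/fintech/transactions/"

    # Read the raw files that the Synapse pipeline copied from Azure SQL into ADLS.
    bronze_df = spark.read.parquet(bronze_path)

    # Basic cleaning: drop exact duplicates, cast the date and amount columns.
    silver_df = (
        bronze_df.dropDuplicates()
        .withColumn("txn_date", F.to_date("txn_date"))
        .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    )

    # Persist as a Delta table in the Silver layer; the Silver -> Gold notebook
    # follows the same read-transform-write pattern.
    silver_df.write.format("delta").mode("overwrite").save(silver_path)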

Airlines Data Incremental Processing with CI/CD
• Set up an Azure DevOps account and created an organization, project, repository, and agent pool for the local machine.
• Set up the dev environment for the ADF pipeline with ADLS as the source and Synapse as the target (the incremental pattern it automates is sketched below).
• Set up an automated release process and deployment of updated ARM templates to the PROD ADF pipeline.
Tech Stack: ADLS, Azure Data Factory, Azure Synapse, Logic Apps, GitHub, Azure DevOps
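
The pipeline itself is orchestrated in ADF; purely as an illustration of the incremental (watermark-based) pattern it automates, here is a hedged PySpark/Delta sketch. The paths, the flight_id key, and the load_ts watermark column are assumptions made for the example only.

    # Watermark-based incremental load (illustrative sketch; the real project drives this via ADF).
    # All paths, table names, key and watermark columns are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F
    from delta.tables import DeltaTable

    spark = SparkSession.builder.appName("airlines-incremental").getOrCreate()

    source_path = "abfss://landing@<storage_account>.dfs.core.windows.net/airlines/flights/"
    target_path = "abfss://curated@<storage_account>.dfs.core.windows.net/airlines/flights_delta/"

    incoming = spark.read.parquet(source_path)

    if DeltaTable.isDeltaTable(spark, target_path):
        target = DeltaTable.forPath(spark, target_path)
        # Only process rows newer than the last load (the watermark).
        last_load = target.toDF().agg(F.max("load_ts")).collect()[0][0]
        new_rows = incoming.filter(F.col("load_ts") > last_load) if last_load else incoming
        # Upsert on the business key so reruns stay idempotent.
        (target.alias("t")
               .merge(new_rows.alias("s"), "t.flight_id = s.flight_id")
               .whenMatchedUpdateAll()
               .whenNotMatchedInsertAll()
               .execute())
    else:
        # First run: write the full snapshot as the initial Delta table.
        incoming.write.format("delta").mode("overwrite").save(target_path)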

EDUCATION
BACHELOR OF SCIENCE IN INFORMATION TECHNOLOGY AND MANAGEMENT
UTKAL UNIVERSITY | Odisha
2021-2024, CGPA: 7.45/10
