- 👋 Hi, I’m Carlota Gordillo
- 👀 I’m interested in Data Analysis and Machine Learning
- 🌱 I’m currently learning Data Analytics at Ironhack
- 📫 Reach me by email: [email protected]
- 🔗 LinkedIn: https://www.linkedin.com/in/carlota-gordillo-alvarez
This project focuses on optimizing retail inventory through data analysis and predictive modeling. By forecasting product demand, we reduce overstocking and stockouts, enhancing warehouse efficiency and increasing sales. Key features include:
- Demand Forecasting with advanced models like Prophet to predict weekly sales and optimize stock levels.
- Reorder Point (ROP) Calculation and Safety Stock metrics to ensure key products are always available, minimizing the risk of stockouts.
- Warehouse Optimization using clustering and route optimization to reduce retrieval times and improve operational efficiency.
- ABC Classification to categorize products based on their revenue contribution and demand, optimizing inventory investment.
- Personalized Recommendations based on customer behavior and product similarity, driving additional sales and customer loyalty. Interactive tools like Streamlit provide real-time insights, while Power BI enables dynamic sales and trend analysis.
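As a rough illustration of the ABC idea above, here is a minimal pandas sketch (with made-up revenue figures, not the project's data) that ranks products by revenue share and applies the classic 80/15/5 cut-offs:

```python
import pandas as pd

# Hypothetical product revenues for illustration only.
df = pd.DataFrame({
    "product": ["P1", "P2", "P3", "P4", "P5"],
    "revenue": [50000, 30000, 10000, 6000, 4000],
})

# Rank products by revenue and compute each one's cumulative revenue share.
df = df.sort_values("revenue", ascending=False)
df["cum_share"] = df["revenue"].cumsum() / df["revenue"].sum()

# Classic ABC cut-offs: A covers ~80% of revenue, B the next ~15%, C the rest.
df["class"] = pd.cut(df["cum_share"], bins=[0, 0.80, 0.95, 1.0],
                     labels=["A", "B", "C"])
print(df)
```

Class A items then receive the tightest inventory control and the largest share of investment, while C items can be reviewed less frequently.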
Explore the repository to see how these strategies can transform retail inventory management and boost profitability.
Tools used: Python, Prophet, ARIMA, SARIMAX, Power BI, Clustering (K-Means, PCA), Optimization Algorithms.
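The reorder-point and safety-stock logic mentioned above follows a standard textbook formula (ROP = expected demand during lead time + safety stock, with safety stock = z · σ_daily · √L under the assumption of independent normal daily demand). A minimal sketch with illustrative numbers:

```python
import math
from statistics import NormalDist

def reorder_point(mean_daily_demand, std_daily_demand, lead_time_days,
                  service_level=0.95):
    """ROP = demand expected during lead time + safety stock.

    Safety stock = z * sigma_daily * sqrt(L), assuming daily demand
    is independent and roughly normal.
    """
    z = NormalDist().inv_cdf(service_level)  # z-score for the target service level
    safety_stock = z * std_daily_demand * math.sqrt(lead_time_days)
    rop = mean_daily_demand * lead_time_days + safety_stock
    return rop, safety_stock

# Hypothetical SKU: 40 units/day on average, std 10, 4-day lead time.
rop, ss = reorder_point(mean_daily_demand=40, std_daily_demand=10,
                        lead_time_days=4)
print(f"safety stock ≈ {ss:.1f} units, reorder point ≈ {rop:.1f} units")
```

When on-hand stock falls to the reorder point, a new order is triggered; raising the service level raises z and therefore the safety stock.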
This project aims to develop a machine learning model to predict a song's popularity on Spotify using musical features such as tempo, duration, genre, and other attributes. The Ultimate Spotify Tracks DB dataset from Kaggle was used, which contains various song-related variables.
The main goals were to identify the key factors that influence a song's popularity and build a predictive model. Exploratory Data Analysis (EDA) was performed to detect patterns in the most important variables, such as tempo and energy. Several machine learning algorithms were tested to address both classification and regression problems.
The most effective model was Random Forest, with an F1-Score of 0.834 in classifying popular songs. PCA was used to reduce dimensionality and improve model efficiency. Additionally, the project was visualized using tools like Power BI and Streamlit, providing an interactive interface to explore the results.
Tools used: Python, Pandas, Scikit-learn, Matplotlib, Seaborn, Power BI, Jupyter Notebooks.
The repository includes trained models, analysis notebooks, and Streamlit apps for visualizing results.
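The classification setup described above can be sketched in a few lines of scikit-learn. This uses synthetic features as a stand-in for the audio attributes (tempo, energy, etc.), so the score will differ from the project's reported 0.834:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for audio features (tempo, energy, danceability, ...).
X, y = make_classification(n_samples=2000, n_features=10, n_informative=6,
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Random Forest was the best-performing model in the project.
clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)

f1 = f1_score(y_test, clf.predict(X_test))
print(f"F1-score: {f1:.3f}")
```

In the actual pipeline, PCA can be inserted before the classifier (e.g. via a `Pipeline`) to reduce dimensionality, as described above.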
This project analyzes the outcomes of a digital A/B test conducted by Vanguard, a leading investment management firm. The experiment aimed to determine whether a modernized user interface (UI) with contextual cues could improve online task completion rates. By comparing a control group using the traditional UI with a test group using the redesigned UI, we evaluated key performance metrics such as completion rates, error rates, and time spent on each step. Power BI was used to create interactive dashboards for data visualization, allowing for a deeper understanding of the results. The findings show a statistically significant increase in task completion for the test group, confirming that the updated UI enhances the user experience.
Tools used: Python, Seaborn, Matplotlib, Power BI
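The "statistically significant increase in task completion" can be checked with a standard two-proportion z-test. A self-contained sketch with hypothetical counts (not Vanguard's actual numbers):

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two completion rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical counts: control (traditional UI) vs. test (redesigned UI).
z, p = two_proportion_ztest(success_a=4100, n_a=8000,
                            success_b=4450, n_b=8000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below the chosen significance threshold (typically 0.05) supports the conclusion that the redesigned UI improves completion rates.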
This project explores the intersection of lifestyle factors and mental health, aiming to prevent mental health issues among workers and enhance overall well-being. By analyzing data on sleep, stress, physical activity, and mental health treatments, we provide actionable insights to improve workplace wellness. Key findings highlight the relationship between sleep quality and stress, the effectiveness of therapy treatments, and how mental health trends vary across age, gender, and occupation. The project employs SQL, Python, and data visualization tools to analyze and communicate these insights effectively.
Tools used: SQL, Python, Seaborn, Matplotlib.
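The sleep-versus-stress relationship highlighted above boils down to a correlation analysis. A minimal pandas sketch on a made-up sample (the real project uses survey data):

```python
import pandas as pd

# Hypothetical survey sample: nightly sleep hours vs. self-reported stress (1-10).
df = pd.DataFrame({
    "sleep_hours": [5, 6, 6, 7, 7, 8, 8, 9],
    "stress":      [9, 8, 7, 6, 5, 4, 4, 2],
})

# Pearson correlation: a strongly negative value is consistent with the
# sleep-quality/stress relationship described above.
corr = df["sleep_hours"].corr(df["stress"])
print(f"sleep vs. stress correlation: {corr:.2f}")
```

Grouping the same metrics by age, gender, or occupation (`df.groupby(...)`) yields the demographic breakdowns mentioned in the findings.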
The goal of this project is to analyze Spain’s real estate market to provide data-driven insights for smarter investment decisions. By gathering and processing data from various sources, including real estate portals like Idealista through web scraping, as well as official social and cadastral records, we evaluate key factors affecting housing prices and rental demand in Spain’s most populated cities. Using Python and data visualization tools, we uncovered significant trends such as high demand and limited supply in major cities, university-driven rental demand, and the influence of tourism on housing prices, helping investors make informed decisions.
Tools used: Python, Pandas, Web Scraping, Matplotlib, Seaborn.
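The scraping step can be illustrated with BeautifulSoup on a toy HTML fragment. Real portals use different markup, and any scraping must respect the site's terms of service and robots.txt; the fragment below is invented for illustration:

```python
from bs4 import BeautifulSoup

# Toy HTML mimicking a listings page (not Idealista's actual markup).
html = """
<div class="listing"><span class="price">250.000 €</span><span class="m2">80 m²</span></div>
<div class="listing"><span class="price">410.000 €</span><span class="m2">100 m²</span></div>
"""

soup = BeautifulSoup(html, "html.parser")
rows = []
for card in soup.select("div.listing"):
    # Strip thousands separators and units before converting to numbers.
    price = int(card.select_one("span.price").text
                .replace(".", "").replace(" €", ""))
    size = int(card.select_one("span.m2").text.replace(" m²", ""))
    rows.append({"price_eur": price, "size_m2": size,
                 "eur_per_m2": price / size})

for row in rows:
    print(row)
```

Derived metrics such as price per square metre make listings comparable across cities and feed directly into the trend analysis described above.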
This project aims to analyze and visualize shark attack data to uncover patterns and trends that can provide valuable insights. By utilizing Python and various data analysis tools, the project focuses on data cleaning, visualization, and reporting to better understand the factors influencing shark attacks. Key findings include seasonal trends, geographic hotspots, and correlations with environmental variables. The project leverages Pandas for data manipulation, Matplotlib and Seaborn for visualization, and Jupyter Notebooks for an interactive analysis environment.
Tools used: Python, Pandas, Matplotlib, Seaborn.
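The seasonal and geographic breakdowns come down to date-based grouping in pandas. A minimal sketch on a few invented records (the real project uses a full incident dataset):

```python
import pandas as pd

# Hypothetical incident records for illustration only.
df = pd.DataFrame({
    "date": pd.to_datetime(["2019-01-15", "2019-07-04", "2019-07-20",
                            "2020-01-02", "2020-08-11", "2020-12-25"]),
    "country": ["Australia", "USA", "USA", "Australia", "USA", "Australia"],
})

# Month-level counts surface seasonal peaks (note that "summer" differs
# between the Northern and Southern hemispheres).
by_month = df.groupby(df["date"].dt.month).size()
by_country = df["country"].value_counts()
print(by_month)
print(by_country)
```

Plotting `by_month` with Matplotlib or Seaborn gives the seasonal-trend chart, and grouping by country or region reveals the geographic hotspots.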
Numerical approximation of differential equations with neural networks and their implementation in Python
This project focuses on the numerical approximation of ordinary differential equations (ODEs) and partial differential equations (PDEs) using neural networks. The primary goal was to explore how neural networks, particularly deep learning models, can be applied to solve differential equations that describe complex systems, such as the SIR model or the heat equation. The implementation was done using Python and PyTorch, leveraging the power of machine learning to provide accurate and efficient solutions to these problems.
Tools used: Python, Matplotlib, PyTorch, NumPy.
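The core idea, training a network to satisfy a differential equation, can be sketched in the physics-informed style: penalize the ODE residual (via autograd) plus the initial condition. This toy example solves dy/dt = −y, y(0) = 1 (exact solution e^{−t}), far simpler than the SIR model or heat equation studied in the project:

```python
import torch

torch.manual_seed(0)

# Small network approximating y(t) on [0, 2].
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
t = torch.linspace(0.0, 2.0, 100).reshape(-1, 1)

for step in range(2000):
    opt.zero_grad()
    t_req = t.clone().requires_grad_(True)
    y = net(t_req)
    # dy/dt via automatic differentiation.
    dy = torch.autograd.grad(y, t_req, torch.ones_like(y), create_graph=True)[0]
    residual = dy + y                      # ODE residual: dy/dt + y = 0
    ic = net(torch.zeros(1, 1)) - 1.0      # initial condition y(0) = 1
    loss = (residual ** 2).mean() + (ic ** 2).mean()
    loss.backward()
    opt.step()

final_loss = loss.item()
y1 = net(torch.ones(1, 1)).item()
print(f"y(1) ≈ {y1:.3f} (exact e^-1 ≈ 0.368), final loss = {final_loss:.4f}")
```

The same structure extends to systems (e.g. SIR: one residual per compartment) and to PDEs, where the residual involves partial derivatives with respect to both space and time.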

