Recommendation Systems
The cold start problem in recommendation systems arises when there is insufficient historical
interaction data for new users, new items, or even when launching a new system, making it
challenging to provide accurate recommendations [1] [2] [3] .
Key Strategies to Address the Cold Start Problem:
Content-Based Filtering:
For new users or items, recommendation systems rely on content features such as item
descriptions, attributes, or user profiles (e.g., demographics, interests). By matching user
profiles to item characteristics, the system can generate initial recommendations without
needing prior interaction data [4] [2] [1] .
Popularity and Trend-Based Recommendations:
Recommending popular or trending items is a common approach for new users, as these
items are generally well-received and require no personalization [4] .
Demographic and Metadata Utilization:
Systems can use available demographic data (age, gender, location) for users and
metadata (category, tags, description) for items to infer initial preferences and make
baseline recommendations [2] .
Hybrid Approaches:
Combining collaborative filtering with content-based methods (hybrid recommenders) helps
mitigate the disadvantages of relying solely on one technique, especially when interaction
data is sparse [5] .
Transfer Learning and Meta-Learning:
These advanced machine learning techniques leverage knowledge from existing users or
items to inform recommendations for new users or items. Meta-learning, for example, allows
the system to quickly adapt to new scenarios by updating model parameters with minimal
new data [4] [6] .
Active Learning and Preference Elicitation:
Systems may prompt new users to rate a few carefully selected items, maximizing the
information gained from each response to rapidly build a useful user profile [1] .
Social Network and External Data:
Allowing users to sign up with social logins or importing data from other platforms can
provide immediate insights into user preferences, helping to overcome the lack of historical
data [7] .
A/B Testing and Iterative Refinement:
Continuous experimentation with different cold start strategies helps identify the most
effective methods for a given system and user base [4] .
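To make this concrete, here is a minimal Python sketch (the item tags, interaction log, and function names are illustrative assumptions, not taken from the cited sources) of a simple cold start fallback: popularity-based recommendations for a brand-new user, switching to content matching once the user has stated some preferences.

```python
from collections import Counter

def recommend(user_profile, interactions, items, k=3):
    """Cold-start fallback: popularity for new users, content matching otherwise.

    user_profile: set of preferred tags (empty for a brand-new user)
    interactions: list of (user_id, item_id) pairs observed so far
    items: dict mapping item_id -> set of content tags
    """
    if not user_profile:
        # New user with no stated preferences: recommend globally popular items.
        popularity = Counter(item_id for _, item_id in interactions)
        return [item_id for item_id, _ in popularity.most_common(k)]

    # Otherwise, rank items by overlap between their tags and the user's profile.
    scored = sorted(items, key=lambda i: len(items[i] & user_profile), reverse=True)
    return scored[:k]

items = {"a": {"sci-fi", "action"}, "b": {"drama"}, "c": {"action", "comedy"}}
interactions = [("u1", "a"), ("u2", "a"), ("u2", "b")]
print(recommend(set(), interactions, items))       # popularity fallback for a new user
print(recommend({"action"}, interactions, items))  # content-based match once preferences exist
```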
Summary Table: Cold Start Strategies
New Item: Metadata enrichment, content similarity, transfer learning, popularity-based recommendations
New System: Hybrid approaches, initial random/popular recommendations, active learning, transfer from similar domains
By combining these strategies, recommendation systems can provide relevant suggestions even
when historical data is limited, gradually improving personalization as more interactions are
collected [4] [1] [2] .
⁂
Steps of a bandwagon attack:
Create fake profiles: Assign maximum ratings to popular items and the target item
Influence outcomes: The target item appears more often in recommendations to genuine users
Bandwagon attacks exploit the popularity bias in recommendation systems, making them a
significant security concern [8] [9] .
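As a rough illustration of why this works, the sketch below (toy ratings, not from the cited sources) shows how fake profiles that co-rate popular items with the target item end up looking like close neighbours of a genuine user, so the target item receives a high predicted rating.

```python
import numpy as np

# Items: [popular1, popular2, niche, target]
genuine_user = np.array([5, 4, 2, 0])   # has not rated the target item yet

# Bandwagon attack profiles: maximum ratings on popular items AND on the target item.
attack_profiles = np.array([
    [5, 5, 0, 5],
    [5, 5, 0, 5],
])

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Because the fake profiles agree with the genuine user on the popular items,
# they look like close neighbours, and their top-rated target item gets pushed
# into the genuine user's recommendations.
sims = np.array([cosine(genuine_user, p) for p in attack_profiles])
predicted_target_rating = sims @ attack_profiles[:, 3] / sims.sum()
print(sims, predicted_target_rating)   # high similarity -> target predicted near 5
```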
⁂
Key Distinctions
Knowledge-based recommenders excel where purchases are rare, stakes are high, or
users have specific, complex requirements. They use domain knowledge and explicit user
input rather than learning from past behavior [19] [20] [21] .
Collaborative filtering relies on the wisdom of the crowd, using user-item interaction
patterns to recommend items. It performs best with abundant user data but fails with new
users or items (the cold start problem) [22] [23] [24] .
Content-based recommenders focus on matching item features to user preferences,
making them suitable when item attributes are rich and well-defined, but they may struggle
to suggest novel items outside a user's established interests [22] [23] [24] .
"Knowledge-based, content-based, and collaborative filtering are three main
approaches in recommendation systems, each designed to cater to specific contexts
and data availability" [19] [20] [21] .
In summary, knowledge-based systems are fundamentally different because they do not rely on
user history or item similarity, but instead use explicit knowledge and constraints to generate
recommendations, making them ideal for domains with sparse or no user interaction data.
⁂
How It Works
1. User-Item Matrix:
All users’ interactions (ratings, purchases, etc.) with items are stored in a matrix, where each
row is a user and each column is an item [30] [31] .
2. Similarity Calculation:
For a target user, the system computes similarity scores (using measures like cosine
similarity or Pearson correlation) between this user and all other users based on their item
ratings or interactions [31] [32] .
3. Neighbor Selection:
The system selects the top-k users (nearest neighbors) who are most similar to the target
user [30] [31] .
4. Recommendation Generation:
Items that the nearest neighbors have liked or rated highly, but that the target user has not yet interacted with, are recommended to the target user. The predicted preference for an item is often a weighted average of the neighbors' ratings, weighted by similarity [30] [31] .
Example
Suppose we have the following user-item ratings matrix for four users (U1, U2, U3, U4) and four
movies (M1, M2, M3, M4):
M1 M2 M3 M4
U1 5 3 ? 1
U2 4 2 4 1
U3 5 3 5 2
U4 1 5 1 4
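Continuing this example, the following sketch predicts U1's missing rating for M3 using Pearson correlation and a similarity-weighted average over the two nearest neighbours; the code is an illustrative implementation of the steps above, not taken from the cited sources.

```python
import numpy as np

ratings = {
    "U1": {"M1": 5, "M2": 3, "M4": 1},            # M3 is the unknown rating
    "U2": {"M1": 4, "M2": 2, "M3": 4, "M4": 1},
    "U3": {"M1": 5, "M2": 3, "M3": 5, "M4": 2},
    "U4": {"M1": 1, "M2": 5, "M3": 1, "M4": 4},
}

def pearson(u, v):
    """Pearson correlation over the items both users have rated."""
    common = sorted(set(ratings[u]) & set(ratings[v]))
    a = np.array([ratings[u][i] for i in common], dtype=float)
    b = np.array([ratings[v][i] for i in common], dtype=float)
    a, b = a - a.mean(), b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

# Similarity of the target user U1 to every other user.
sims = {v: pearson("U1", v) for v in ratings if v != "U1"}

# Top-2 most similar neighbours who have actually rated M3.
neighbours = [v for v in sorted(sims, key=sims.get, reverse=True) if "M3" in ratings[v]][:2]

# Predicted rating for (U1, M3): similarity-weighted average of the neighbours' ratings.
prediction = (sum(sims[v] * ratings[v]["M3"] for v in neighbours)
              / sum(abs(sims[v]) for v in neighbours))
print(sims)        # U2 and U3 are highly similar to U1, U4 is negatively correlated
print(prediction)  # roughly 4.5
```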
Summary
User-based Nearest Neighbor recommends items to a user by finding users with similar tastes
and leveraging their preferences to make predictions. This method is widely used in
collaborative filtering for tasks like movie, product, or book recommendations [30] [31] [32] .
⁂
Architecture of Content-Based Recommendation Systems
Content-based recommendation systems suggest items to users by analyzing the features of
items and matching them with the user’s preferences, which are inferred from their previous
interactions or explicitly provided data.
Typical Architecture
The architecture of a content-based recommender system generally consists of the following
components:
1. Data Layer
Item Data: Contains features or attributes of items (e.g., genre, description, keywords
for movies; price, category, description for products) [33] [34] .
User Data: Stores user profiles, preferences, and interaction histories (e.g., items
viewed, liked, or rated) [33] [35] [34] .
2. Feature Extraction and Representation
Extracts meaningful features from item data (e.g., using NLP for text, image analysis for
pictures) [36] [35] .
Represents items and user preferences as feature vectors in a common space [36] [37] .
3. User Profile Construction
Builds a user profile by aggregating features from items the user has interacted with or
explicitly liked [35] [37] .
The user profile is typically a weighted vector reflecting the importance of various
features to the user [35] [37] .
4. Similarity Computation
Calculates similarity between the user profile and item profiles using metrics like cosine
similarity or dot product [38] [35] [37] .
5. Recommendation Engine
Ranks all items based on similarity scores and recommends the most similar items to the
user [38] [35] [39] .
6. Feedback and Update
Updates user profiles and refines recommendations as users interact with more
items [35] .
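A compact sketch of this pipeline, assuming scikit-learn's TF-IDF vectorizer as the feature extractor (the text above only mentions NLP-based extraction in general, and the item descriptions are invented): item descriptions become vectors, the user profile is the average of liked-item vectors, and cosine similarity ranks the remaining items.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# 1-2. Data layer + feature extraction: represent each item by a TF-IDF vector.
items = {
    "m1": "space opera with battles and aliens",
    "m2": "romantic comedy set in paris",
    "m3": "alien invasion action thriller",
    "m4": "quiet drama about family life",
}
ids = list(items)
item_vectors = TfidfVectorizer().fit_transform(items.values())

# 3. User profile construction: average of the vectors of items the user liked.
liked = ["m1", "m3"]
profile = np.asarray(item_vectors[[ids.index(i) for i in liked]].mean(axis=0))

# 4-5. Similarity computation + recommendation: rank unseen items by cosine similarity.
scores = cosine_similarity(profile, item_vectors).ravel()
ranked = sorted((i for i in ids if i not in liked),
                key=lambda i: scores[ids.index(i)], reverse=True)
print(ranked)  # items most similar to the user's profile come first
```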
Content-based systems are highly personalized and effective when item features are rich and
user preferences are clear, making them ideal for domains like movies, news, and e-commerce
products [35] [40] [37] .
⁂
Example Application
In a movie recommendation system:
Metadata similarity: Cosine similarity between genre vectors.
Visual similarity: Feature extraction from posters using deep learning, followed by
clustering or similarity computation.
Review similarity: Text vectorization of user reviews and cosine similarity to compare
sentiment or themes [41] .
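A minimal sketch of combining two of these signals (the genre vectors, review texts, and blending weights are illustrative assumptions; the visual-similarity branch is omitted for brevity):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Metadata similarity: cosine similarity between binary genre vectors.
genres = np.array([[1, 0, 1],    # movie A: action, sci-fi
                   [1, 1, 0]])   # movie B: action, comedy
metadata_sim = cosine_similarity(genres)[0, 1]

# Review similarity: TF-IDF vectors of user reviews compared with cosine similarity.
reviews = ["thrilling space battles and great effects",
           "funny action scenes but weak plot"]
review_vecs = TfidfVectorizer().fit_transform(reviews)
review_sim = cosine_similarity(review_vecs)[0, 1]

# Weighted blend of the two signals (weights are illustrative, not from the source).
overall = 0.6 * metadata_sim + 0.4 * review_sim
print(metadata_sim, review_sim, overall)
```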
How It Works
Feature Extraction:
Extract content-based features (e.g., item metadata, user profiles) and collaborative
features (e.g., user-item rating patterns, average ratings, user similarity metrics).
Feature Augmentation:
Merge these features into a single vector for each user-item pair.
Model Training:
Use a machine learning algorithm (such as decision trees, logistic regression, or neural
networks) to learn from this enriched feature set and predict user preferences or ratings.
Recommendation Generation:
The trained model predicts which items each user is likely to prefer, based on the combined
features.
Example
Suppose you are building a movie recommendation system:
Content-based features:
Movie genres (Action, Comedy, Drama, etc.)
Director, actors, release year
Movie description (converted to TF-IDF vector)
Collaborative features:
Average rating of the movie
Number of users who rated the movie
Similarity score between the target user and other users who liked the movie
Feature combination:
For each user-movie pair, create a feature vector that includes both the content-based and
collaborative features.
Model application:
Use a classifier or regression model to predict the likelihood that the user will like or rate the
movie highly, and recommend the top-ranked movies.
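A schematic version of this feature-combination idea, with invented numbers and logistic regression used as an illustrative stand-in for the learner (the experiment in the quotation below used the Ripper rule learner):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one user-movie pair: content features and collaborative features combined.
#   [is_action, is_comedy, release_year_scaled, avg_rating, n_ratings_scaled, user_sim]
X = np.array([
    [1, 0, 0.9, 4.5, 0.8, 0.7],
    [0, 1, 0.5, 3.0, 0.2, 0.1],
    [1, 0, 0.7, 4.0, 0.6, 0.6],
    [0, 1, 0.3, 2.5, 0.1, 0.2],
])
y = np.array([1, 0, 1, 0])   # 1 = the user liked the movie

# Train a single model on the augmented feature vectors.
model = LogisticRegression().fit(X, y)

# Score a new user-movie pair and use the probability as a ranking signal.
candidate = np.array([[1, 0, 0.8, 4.2, 0.5, 0.65]])
print(model.predict_proba(candidate)[:, 1])   # probability the user will like it
```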
"Using feature combination as a Hybrid Recommender engine, you can easily achieve
the content/collaborative merger. This is done by basically treating the collaborative
information as simple additional feature data associated with each example and use
content-based techniques over this augmented data set. For example, in an experiment,
in order to achieve higher precision rate than that achieved by just collaborative method,
inductive rule learner, Ripper, was applied to the task of recommending movies using
both user ratings and content features" [48] [49] [50] .
Advantages
Reduces sensitivity to the number of users who have rated an item.
Leverages strengths of both collaborative and content-based methods.
Can improve recommendation accuracy, especially in sparse data scenarios.
Summary Table
Feature Extraction: Gather content-based and collaborative features for each user-item pair
Model Training: Train a machine learning model using the combined feature vectors
This approach is particularly effective in domains where both user behavior and item
characteristics are informative, such as movie, music, or product recommendation platforms.
⁂
Constraints:
price ≤ max_price
mileage ≤ max_mileage
seats = num_seats
color ∈ {Blue, White}
Domain-specific constraints can also be included, such as:
If the car is older than 4 years, a technical inspection within the last 6 months is
required [54] .
Example Solution
Given the above user requirements and a database of cars, the system would:
Select only those cars where price ≤ $15,000, mileage ≤ 60,000 km, seats = 5, and color is
Blue or White.
Apply additional domain rules, such as checking for a recent technical inspection if the car is
older than 4 years.
Return the list of cars that meet all these conditions.
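A direct translation of this filtering step into code (the car records, field names, and reference date are hypothetical):

```python
from datetime import date, timedelta

cars = [
    {"id": 1, "price": 14000, "mileage": 55000, "seats": 5, "color": "Blue",
     "year": 2018, "last_inspection": date(2025, 3, 1)},
    {"id": 2, "price": 12000, "mileage": 70000, "seats": 5, "color": "White",
     "year": 2022, "last_inspection": None},
]

requirements = {"max_price": 15000, "max_mileage": 60000,
                "num_seats": 5, "colors": {"Blue", "White"}}

def satisfies(car, req, today=date(2025, 6, 1)):
    # User constraints: price, mileage, seats, colour.
    if car["price"] > req["max_price"] or car["mileage"] > req["max_mileage"]:
        return False
    if car["seats"] != req["num_seats"] or car["color"] not in req["colors"]:
        return False
    # Domain constraint: cars older than 4 years need an inspection within the last 6 months.
    if today.year - car["year"] > 4:
        if car["last_inspection"] is None or (today - car["last_inspection"]) > timedelta(days=182):
            return False
    return True

print([car["id"] for car in cars if satisfies(car, requirements)])  # only car 1 qualifies
```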
Conclusion
Constraint-based recommendation is represented as a CSP, where the goal is to find item
assignments that satisfy all user and domain constraints. The problem is solved by filtering items
through these constraints, ensuring recommendations are tailored to explicit user requirements
and domain logic [51] [54] [52] .
⁂
Accuracy
Accuracy measures how well the recommender predicts user preferences. It is often quantified
using metrics like Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), Precision, and
Recall. High accuracy means the system effectively recommends items that users actually like or
engage with [55] [56] .
Coverage
Coverage assesses the proportion of items or users for which the system can make
recommendations. Item coverage reflects how many items appear in recommendations, while
user coverage indicates how many users receive meaningful suggestions. High coverage
ensures the system serves a broad range of users and items, not just the most popular ones [55] .
Novelty
Novelty measures how new or unexpected the recommended items are to the user. A system
with high novelty introduces users to items they haven’t seen or considered before, preventing
recommendation fatigue and keeping the experience fresh [55] [59] .
Serendipity
Serendipity goes beyond novelty by recommending items that are both unexpected and
pleasantly surprising. It aims to delight users with suggestions they would not have discovered
on their own but end up enjoying, enhancing engagement and satisfaction [55] [59] .
Diversity
Diversity evaluates how varied the recommendations are within a list. High diversity ensures that
recommendations are not too similar to each other, catering to different facets of a user’s
interests and reducing the risk of monotony [55] [59] .
Robustness
Robustness measures the system’s resilience to noise, adversarial attacks, or data manipulation
(e.g., shilling or bandwagon attacks). A robust system maintains performance and
recommendation quality even when faced with imperfect or malicious input [55] .
Stability
Stability refers to the consistency of recommendations over time or across similar user profiles.
Users expect that small changes in their behavior or profile should not lead to radically different
recommendations, which fosters trust and usability [55] .
Scalability
Scalability assesses how well the system performs as the number of users, items, or interactions
grows. A scalable recommender maintains responsiveness and quality even under heavy loads
or with massive datasets, which is critical for real-world deployment [55] .
"Aside from the well known goal of accuracy, other general goals include factors such as
diversity, serendipity, novelty, robustness, and scalability. Some of these goals can be
concretely quantified, whereas others are subjective goals based on user experience."
[55]
Error Metrics
Error metrics evaluate how accurately a recommender system predicts user preferences,
typically by comparing predicted ratings or scores to actual user feedback. These are crucial for
quantifying the prediction quality of algorithms, especially in rating-based systems.
Common Error Metrics:
Mean Absolute Error (MAE):
Measures the average absolute difference between predicted and actual ratings. It treats all
errors equally, providing a straightforward interpretation of overall prediction accuracy [60]
[61] [62] [63] [64] [65] .
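For concreteness, a small sketch computing MAE from toy predicted and actual ratings, with RMSE alongside for comparison (RMSE penalises large errors more heavily because errors are squared before averaging):

```python
import numpy as np

actual    = np.array([4.0, 3.0, 5.0, 2.0, 4.0])
predicted = np.array([3.5, 3.0, 4.0, 2.5, 4.5])

# MAE: average absolute difference; every error counts the same.
mae = np.mean(np.abs(predicted - actual))
# RMSE: square errors before averaging, so large mistakes weigh more.
rmse = np.sqrt(np.mean((predicted - actual) ** 2))
print(mae, rmse)   # 0.5, about 0.59
```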
In practice, both error and decision support metrics are used together to comprehensively
evaluate and improve recommender systems, ensuring both accurate predictions and
effective user decision support. [60] [61] [62] [63] [64] [65] [67]
⁂
Mathematical Definition
If $ X $ is a random vector with elements $ X_1, X_2, ..., X_n $, the covariance matrix $ \Sigma $ is defined as
$ \Sigma = \mathrm{E}\left[ (X - \mathrm{E}[X])(X - \mathrm{E}[X])^{T} \right] $,
so that each entry is $ \Sigma_{ij} = \mathrm{Cov}(X_i, X_j) = \mathrm{E}\left[ (X_i - \mathrm{E}[X_i])(X_j - \mathrm{E}[X_j]) \right] $.
Properties
Symmetry: The covariance matrix is always symmetric, meaning $ \Sigma_{ij} = \Sigma_{ji} $.
Diagonal Elements: The diagonal entries ($ \Sigma_{ii} $) represent the variances of each
element.
Off-diagonal Elements: The off-diagonal entries ($ \Sigma_{ij} $, $ i \neq j $) represent the
covariances between different elements.
Applications in Recommendation Systems
In recommender systems, covariance matrices are used to capture relationships between users
or items. For example, a user-user covariance matrix can describe how users’ preferences
change together, providing a more nuanced understanding than simple similarity measures. This
approach can help address issues like data sparsity and changing user interests, and can
improve both the diversity and precision of recommendations [69] [70] .
Matrix factorization techniques in collaborative filtering sometimes assume that latent features
are uncorrelated (diagonal covariance), but more advanced methods use full or sparse
covariance matrices to better capture the structure of user or item relationships, which can also
help prevent overfitting and reflect real-world semantics [70] .
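A minimal sketch, assuming NumPy and a toy rating matrix, of computing a user-user covariance matrix and checking the symmetry property described above:

```python
import numpy as np

# Rows = users, columns = items; entries are ratings (toy data).
R = np.array([[5, 3, 4, 1],
              [4, 2, 4, 1],
              [5, 3, 5, 2],
              [1, 5, 1, 4]], dtype=float)

# User-user covariance matrix: how users' ratings vary together across items.
# np.cov treats each row as one variable (user) and each column as one observation.
user_cov = np.cov(R)
print(user_cov.shape)                      # (4, 4): variances on the diagonal
print(np.allclose(user_cov, user_cov.T))   # True: the matrix is symmetric
```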
In summary, the covariance matrix is a key tool for understanding relationships within
multivariate data, with important applications in statistics, machine learning, and recommender
systems [68] [69] [70] .
⁂
Key Features
Interactive Dialogue: The system asks questions or prompts to gather user preferences.
Dynamic Preference Elicitation: User preferences are updated in real time as the
conversation progresses.
Personalized Recommendations: Suggestions become more accurate as the system learns
more about the user's needs.
Natural Language Understanding: Many modern systems use NLP to interpret user input
and respond conversationally.
Example: Conversational Recommendation for Restaurant Booking
Scenario:
A user wants to find a restaurant for dinner using a chatbot-based recommender.
Sample Conversation:
System: Hi! What type of cuisine are you interested in tonight?
User: I’d like Italian food.
System: Great! Do you have a price range in mind?
User: Something moderate, not too expensive.
System: Do you prefer a casual or formal setting?
User: Casual is fine.
System: Here are a few Italian restaurants nearby that are casual and moderately priced. Would
you like to see more details or make a reservation?
How it works:
The system starts with broad questions and narrows down options based on the user's
responses.
Each user answer helps the system filter the database and refine its recommendations.
The process continues until the user is satisfied or makes a selection.
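The filtering step in such a dialogue can be as simple as narrowing a candidate list after each answer; the restaurant records and slot names below are made up purely for illustration.

```python
restaurants = [
    {"name": "Trattoria Roma", "cuisine": "italian", "price": "moderate", "setting": "casual"},
    {"name": "Le Petit Jardin", "cuisine": "french", "price": "expensive", "setting": "formal"},
    {"name": "Pasta Bar", "cuisine": "italian", "price": "moderate", "setting": "casual"},
]

# Preferences collected turn by turn during the conversation.
collected = {}
candidates = restaurants
for slot, answer in [("cuisine", "italian"), ("price", "moderate"), ("setting", "casual")]:
    collected[slot] = answer
    # After each answer, filter the remaining candidates on the new constraint.
    candidates = [r for r in candidates if r[slot] == answer]
    print(f"After asking about {slot}: {[r['name'] for r in candidates]}")
```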
Advantages
Handles Complex Preferences: Can accommodate multi-faceted or evolving user needs.
Improves User Satisfaction: Users feel more in control and engaged.
Reduces Choice Overload: By narrowing options step-by-step, the system prevents
overwhelming the user.
Applications
E-commerce shopping assistants
Travel and hotel booking bots
Movie or content recommenders in streaming platforms
Personalized healthcare or financial advice tools
In summary:
Conversational recommendation systems use interactive, dialogue-driven processes to elicit user
preferences and provide tailored suggestions. By mimicking a human-like conversation, they
offer a more engaging and effective way to navigate complex decision spaces, leading to higher
user satisfaction and better recommendation outcomes.
Advantages and Disadvantages of Content-Based Recommendation Systems
Advantages
Personalized Recommendations
Content-based systems tailor suggestions to each individual user by analyzing their explicit
preferences and past behavior, resulting in highly relevant recommendations [71] [72] [73] .
No Need for Other Users’ Data
These systems operate independently of other users’ interactions or ratings, making them
well-suited for environments with limited user data or when privacy is a concern [71] [72] [74] .
Effective for Niche Interests
Content-based filtering can recommend niche or unique items that may not be popular
among the general user base but align closely with a specific user’s interests [75] [71] [72] .
Handles New Items Well
Since recommendations are based on item features rather than user ratings, new items can
be recommended immediately without waiting for user feedback, addressing the "cold
start" problem for items [71] [76] [72] [77] .
Transparency and Explainability
The system can explain recommendations based on item attributes (e.g., "recommended
because it shares the same genre or author"), which increases user trust and
understanding [72] [78] .
Scalability
Content-based systems can scale to large numbers of users because each user’s
recommendations are computed independently, without the need to compare across all
users [71] [74] .
Simplicity in Implementation
These systems are generally simpler to implement and maintain than collaborative filtering,
as they primarily require accurate item attribute assignment [72] .
Disadvantages
Limited Novelty and Diversity (Overspecialization)
Recommendations tend to be very similar to what the user has already seen or liked, which
can lead to a lack of diversity and surprise in suggestions (the "filter bubble" effect) [71] [76]
[72] [73] .
In summary:
Content-based recommendation systems excel at delivering personalized, transparent, and
niche recommendations without relying on other users’ data, and they handle new items
efficiently. However, they may suffer from overspecialization, depend heavily on high-quality
feature engineering, and struggle with novelty, diversity, and new user scenarios [75] [71] [76] [72]
[73] .
Latent Factors
Latent factors are hidden features inferred from the data, not explicitly labeled or observed.
In the context of movies, for example, latent factors might capture dimensions such as
genre preference, action vs. romance, or affinity for certain actors, even if these are not
directly specified in the data.
The model learns these factors by minimizing the difference between the actual ratings and
the predicted ratings (often using techniques like Singular Value Decomposition, SVD) [80]
[81] .
Example
Suppose a movie recommender system has thousands of users and movies, but only a small
fraction of possible ratings are filled in. Matrix factorization will:
Assign each user a vector (e.g., [0.2, -0.5, 1.3]) and each movie a vector (e.g., [0.7, 0.1,
-1.2]) in a latent space.
The predicted rating for a user-movie pair is the dot product of their vectors.
The latent factors might correspond to abstract concepts like "preference for action
movies" or "liking for comedies," even if these aren't explicitly labeled.
Summary Table
Matrix Factorization: Decomposes the user-item interaction matrix into user and item latent factor matrices
Latent Factors: Hidden features inferred from data, capturing underlying patterns in preferences
In summary:
Matrix factorization is a powerful technique that uncovers latent factors representing users and
items, enabling accurate and scalable recommendations by modeling complex, hidden
relationships in the data [79] [80] [81] .
⁂
Note on Pipelined Hybridization Design
Pipelined hybridization design is a strategy in hybrid recommender systems where multiple
recommendation algorithms are organized in a sequential pipeline. In this approach, the output
of one recommender serves as the input (or part of the input) for the next recommender in the
sequence. This design enables each stage to refine, filter, or enrich the recommendations
produced by the previous stage, often leading to more precise and contextually relevant
results [87] [88] .
Key Characteristics
Sequential Processing: Each component in the pipeline processes the data or
recommendations from the previous component, rather than operating independently or in
parallel.
Specialization: Different recommenders in the pipeline can focus on specific aspects of the
recommendation task (e.g., filtering by content first, then ranking by collaborative filtering).
Refinement: Later stages can refine, re-rank, or further restrict the candidate items
identified by earlier stages, resulting in higher precision.
Example
Online Course Recommendation (Cascade Approach):
Stage 1: A content-based recommender filters courses based on user-specified topics and
prerequisites.
Stage 2: Collaborative filtering takes this filtered list and re-ranks courses based on ratings
from similar users.
The user receives a highly relevant, personalized shortlist of courses [89] .
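A skeletal version of this cascade (the course records, topic tags, and ratings are invented for illustration): stage 1 filters by content, and stage 2 re-ranks the survivors by the average rating of similar users.

```python
courses = [
    {"id": "ml101", "topics": {"machine learning", "python"}},
    {"id": "db201", "topics": {"databases", "sql"}},
    {"id": "dl301", "topics": {"machine learning", "deep learning"}},
]

# Ratings that users similar to the target user gave to each course (illustrative).
similar_user_ratings = {"ml101": [5, 4], "dl301": [5, 5], "db201": [3]}

def content_filter(candidates, required_topics):
    """Stage 1: content-based filtering on user-specified topics."""
    return [c for c in candidates if c["topics"] & required_topics]

def collaborative_rerank(candidates, ratings):
    """Stage 2: re-rank survivors by the average rating of similar users."""
    def avg(course):
        r = ratings.get(course["id"], [0])
        return sum(r) / len(r)
    return sorted(candidates, key=avg, reverse=True)

# The output of stage 1 is the input of stage 2: a simple two-stage pipeline.
shortlist = collaborative_rerank(content_filter(courses, {"machine learning"}),
                                 similar_user_ratings)
print([c["id"] for c in shortlist])   # ['dl301', 'ml101']
```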
Advantages
Precision: By sequentially narrowing down options, pipelined designs can produce highly
relevant recommendations.
Modularity: Each stage can be optimized or replaced independently, allowing flexible
system evolution.
Complex Task Handling: Suitable for complex recommendation scenarios where different
algorithms excel at different subtasks.
Disadvantages
Complexity: Requires careful design to ensure compatibility and efficiency between stages.
Potential Information Loss: Each stage may discard potentially relevant items if not
carefully tuned.
Higher Latency: Sequential processing can increase response time compared to parallel
approaches [88] .
In summary:
Pipelined hybridization design leverages the strengths of multiple recommender algorithms in a
sequential manner, enabling each to contribute its unique capabilities to the final
recommendation. This approach is especially valuable for tasks requiring multi-stage filtering,
refinement, or modeling, but demands thoughtful integration and system design for optimal
results [87] [88] .
⁂
1. https://en.wikipedia.org/wiki/Cold_start_(recommender_systems)
2. https://www.linkedin.com/pulse/cold-start-problem-recommender-systems-strategies-iain-brown-ph-d--4lsce
3. https://www.tredence.com/blog/solving-the-cold-start-problem-in-collaborative-recommender-systems
4. https://thingsolver.com/blog/the-cold-start-problem/
5. https://vinija.ai/recsys/cold-start/
6. https://www.expressanalytics.com/blog/cold-start-problem/
7. https://www.freecodecamp.org/news/cold-start-problem-in-recommender-systems/
8. https://www.studocu.com/in/document/anna-university/recommender-system/unit-iv-unit-4/89273415
9. https://grouplens.org/beyond2005/full/burke.pdf
10. https://papers.ssrn.com/sol3/Delivery.cfm/SSRN_ID4712941_code6471361.pdf?abstractid=4712941
11. https://www.ijircst.org/DOC/3-major-challenges-of-recommender-system-and-related-solutions.pdf
12. https://www.ijltet.org/wp-content/uploads/2014/10/21.pdf
13. https://ijiet.com/wp-content/uploads/2013/03/1.pdf
14. https://www.sciencedirect.com/topics/computer-science/hybrid-recommendation
15. https://www.semanticscholar.org/paper/a24cf27c8183f0fa0dcffc9ac5643382d20e2dbd
16. https://en.wikipedia.org/wiki/Knowledge-based_recommender_system
17. https://www.fi.muni.cz/~xpelanek/PV254/slides/other-techniques.pdf
18. https://www.frontiersin.org/journals/big-data/articles/10.3389/fdata.2024.1304439/full
19. https://arxiv.org/pdf/2206.02631.pdf
20. https://www.semanticscholar.org/paper/a6d41840e9c906130e8ab6a1965f9ec359596046
21. https://www.semanticscholar.org/paper/148d4d42447f32d0d80e189c96963174853be1ab
22. https://arxiv.org/vc/arxiv/papers/1402/1402.2145v1.pdf
23. https://arxiv.org/pdf/1912.08932.pdf
24. https://arxiv.org/pdf/1711.04101.pdf
25. http://arxiv.org/pdf/2308.04247.pdf
26. https://arxiv.org/pdf/2503.21188.pdf
27. https://arxiv.org/pdf/1807.11698.pdf
28. https://arxiv.org/pdf/2302.02579.pdf
29. https://arxiv.org/pdf/1802.08452.pdf
30. https://www.encora.com/insights/recommender-system-series-part-2-neighborhood-based-collaborative-filtering
31. https://aurigait.com/blog/recommendation-system-using-knn/
32. https://www.semanticscholar.org/paper/ca2771c4cd41873f54364661fdcc4d7927aed9d4
33. https://www.alibabacloud.com/blog/basic-concepts-and-architecture-of-a-recommender-system_596642
34. https://www.algolia.com/blog/ai/the-anatomy-of-high-performance-recommender-systems-part-iv
35. https://www.zevi.ai/blogs/what-is-a-content-based-recommendation-system-and-how-do-you-build-one
36. https://spotintelligence.com/2023/11/15/content-based-recommendation-system/
37. https://developers.google.com/machine-learning/recommendation/content-based/basics
38. https://media.neliti.com/media/publications/467888-content-based-recommender-system-archite-15adc9ac.pdf
39. https://www.engati.com/glossary/content-based-filtering
40. https://www.stratascratch.com/blog/step-by-step-guide-to-building-content-based-filtering/
41. https://arxiv.org/pdf/2212.00139.pdf
42. https://www.semanticscholar.org/paper/65b4d7e619f3b11074edc6cd708962a5e4535649
43. http://arxiv.org/pdf/2404.11818.pdf
44. https://arxiv.org/pdf/1809.07053.pdf
45. https://www.semanticscholar.org/paper/6bc3c15139b7a919e2461d425ffb83d51599a604
46. https://arxiv.org/abs/2008.07702
47. https://arxiv.org/pdf/2008.00202.pdf
48. https://bluepiit.com/blog/hybrid-recommender-systems
49. https://www.slideshare.net/slideshow/unit-iv-knowledge-and-hybrid-recommendation-systempdf/267037336
50. https://marketsy.ai/blog/hybrid-recommender-systems-beginners-guide
51. https://web-ainf.aau.at/pub/jannach/files/BookChapter_Constraint-BasedRS_2015.pdf
52. https://scholarspace.manoa.hawaii.edu/bitstreams/cc2753a1-fa6a-4f6a-8f35-a4275214816d/download
53. https://www.igi-global.com/article/datatourist/276775
54. https://arxiv.org/pdf/2307.10702.pdf
55. https://www.linkedin.com/pulse/recommender-system-evaluation-goals-part-i-sergey-vasilyev
56. https://milvus.io/ai-quick-reference/what-are-the-key-metrics-for-evaluating-recommender-systems
57. http://arxiv.org/pdf/1209.1983.pdf
58. https://arxiv.org/pdf/2402.04457.pdf
59. https://www.evidentlyai.com/ranking-metrics/evaluating-recommender-systems
60. https://www.evidentlyai.com/ranking-metrics/evaluating-recommender-systems
61. https://neptune.ai/blog/recommender-systems-metrics
62. https://milvus.io/ai-quick-reference/what-are-the-key-metrics-for-evaluating-recommender-systems
63. https://towardsdatascience.com/evaluation-metrics-for-recommendation-systems-an-overview-71290690ecba/
64. https://arxiv.org/html/2312.16015v2
65. https://www.educative.io/answers/what-are-the-evaluation-metrics-for-recommendation-systems
66. https://archive.nyu.edu/bitstream/2451/14303/1/IS-98-17.pdf
67. https://aman.ai/recsys/metrics/
68. https://en.wikipedia.org/wiki/Covariance_matrix
69. https://onlinelibrary.wiley.com/doi/10.1155/2018/9740402
70. http://winsty.net/papers/scmf.pdf
71. https://www.engati.com/glossary/content-based-filtering
72. https://www.upwork.com/resources/what-is-content-based-filtering
73. https://www.linkedin.com/pulse/collaborative-filtering-vs-content-based-recommender-aaz-el-aarab
74. https://www.turing.com/kb/content-based-filtering-in-recommender-systems
75. https://developers.google.com/machine-learning/recommendation/content-based/summary
76. https://www.ibm.com/think/topics/content-based-filtering
77. https://www.ijltet.org/wp-content/uploads/2014/10/21.pdf
78. https://arxiv.org/pdf/2405.05562.pdf
79. https://arxiv.org/pdf/2308.04661.pdf
80. https://arxiv.org/pdf/2203.11026v1.pdf
81. https://www.semanticscholar.org/paper/d78d78cc48604d7b9f69596b6a704459b7d1ef58
82. https://arxiv.org/pdf/1807.05515.pdf
83. http://arxiv.org/pdf/1405.0770.pdf
84. https://arxiv.org/pdf/2112.03089.pdf
85. http://arxiv.org/pdf/1210.5631.pdf
86. https://www.semanticscholar.org/paper/1ddc1105217abed5fab957c06d4efdd8645a7392
87. https://www.ksi.mff.cuni.cz/~peska/vyuka/nswi166/old/nswi166_06_HybridRecsysIntro.pdf
88. https://csse.szu.edu.cn/staff/panwk/recommendation/MISC/HybridRecommendation.pdf
89. https://www.slideshare.net/slideshow/unit-iv-knowledge-and-hybrid-recommendation-systempdf/267037336