Thanks to visit codestin.com
Credit goes to github.com

Skip to content

Conversation

kylelai1
Copy link
Collaborator

@kylelai1 kylelai1 commented Sep 8, 2025

Proposed changes

This PR adds the atlas-list-performance-advisor tool to the MCP server, which retrieves the following performance advisor recommendations from the admin API: index suggestions, drop index suggestions, schema suggestions, slow query logs.

This PR merges the changes into the atlas-list-performance-advisor-tool branch.

Testing

  • Manually tested that the MCP server is able to retrieve performance advisor suggestions.

Checklist

@kylelai1 kylelai1 marked this pull request as ready for review September 8, 2025 20:24
@kylelai1 kylelai1 requested a review from a team as a code owner September 8, 2025 20:24
@kylelai1 kylelai1 requested a review from blva September 8, 2025 20:24
@kylelai1 kylelai1 changed the title Adds the atlas-list-performance-advisor base tool feat: Adds the atlas-list-performance-advisor base tool Sep 8, 2025
Copy link
Collaborator

@nirinchev nirinchev left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Did a quick pass - overall, looks reasonable, but I'm worried that it might not be too LLM-friendly. I suggest testing it thoroughly with different agents/models and confirming it's outputting meaningful insights.

.array(z.nativeEnum(PerformanceAdvisorOperation))
.describe("Operations to list performance advisor recommendations"),
since: z.number().describe("Date to list slow query logs since").optional(),
processId: z.string().describe("Process ID to list slow query logs").optional(),
Copy link
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Is this something we expect the LLM to know how to get? As far as I can tell, you get it by calling atlas processes list but we don't have any tools that mirror that behavior in the MCP server.

Copy link
Collaborator Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Since it's an optional, the LLM does not need to pass this in. It's also noted that it's only for slow query logs. If the processId is not passed in, we use inspect cluster to get the hostname + port that can be used for the process ID, which is handled by the performance advisor util functions that the atlas-list-performance-advisor tool calls.

I also think that when we do more manual testing in the "QA" phase of adding the performance advisor tool, we can more thoroughly test prompts and see how the LLM will prompt the user for more data.

Copy link
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

If we know the model has no way of figuring out this processId, what's the point of exposing it as an argument?

Copy link
Collaborator Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Hm, I might be approaching this incorrectly. This is something that if the user is able to provide, then it'll be passed in. Otherwise, we retrieve the process ID thru the tool. If this is something that the LLM is unable to figure out and we rely on the user to provide the processId, would this be convention to leave out the processId argument?

operations: z
.array(z.nativeEnum(PerformanceAdvisorOperation))
.describe("Operations to list performance advisor recommendations"),
since: z.number().describe("Date to list slow query logs since").optional(),
Copy link
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Should this be z.date instead? Does the LLM do a good job at converting dates to unix epoch?

Copy link
Collaborator Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Good point. Yes, the LLM does a good job of converting dates to unix epoch. I've manually tested just telling the LLM to get metrics for the past X hours for example, and it handled that well.

I can change this to z.date


// If operations is empty, get all performance advisor recommendations
// Otherwise, get only the specified operations
const operationsToExecute = operations.length === 0 ? Object.values(PerformanceAdvisorOperation) : operations;
Copy link
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Should we mark operations as optional and provide a default instead? Right now there's nothing to hint to the LLM it could provide an empty array here.

Copy link
Collaborator Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

This is intended behavior here, so providing an empty array is ok. The LLM will ask the user which operations if we don't specify any, and it was discussed with product to just list all the PA suggestions if none were given here.

Copy link
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Not sure I understand - right now this is a required argument, which tells the LLM to try and figure out a value for it - either by asking the user or hallucinating something. Instead, if we specify it as an optional argument with a default value, we still achieve the goals we set for ourselves in the PD, but we're also more clearly communicating the behavior of the server.

Copy link
Collaborator Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I see what you mean. There's nothing to suggest to the LLM that an empty array is ok to pass in here.
We can provide a default which is all operations in an array, and the LLM will be able to figure out that the default will then be all operations.


try {
if (operationsToExecute.includes(PerformanceAdvisorOperation.SUGGESTED_INDEXES)) {
const { suggestedIndexes } = await getSuggestedIndexes(this.session.apiClient, projectId, clusterName);
Copy link
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

This is probably not super critical, but right now, all of these async operations are evaluated sequentially, which means that we need to wait for one to finish before starting the next one. Instead, it would be a good idea to run them in parallel.

Copy link
Collaborator Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Will change this one!

Comment on lines 90 to 92
return {
content: [{ type: "text", text: JSON.stringify(data, null, 2) }],
};
Copy link
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

We should wrap the response in formatUntrustedData to avoid injection attacks where someone creates a slow query that contains llm instructions. Also, it might be helpful to give hints to the llm what the different fields in the json data represent and how those can be used.

Copy link
Collaborator Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Yes, I'll go ahead and make this change. I'll see how other tools use this.


interface DropIndexSuggestion {
accessCount?: number;
index?: Array<{ [key: string]: 1 | -1 }>;
Copy link
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Is this type definition true? Would PA not suggest dropping geo or text indexes?

Copy link
Collaborator Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Let me look into this. I used the Open API definitions for the Atlas Admin API, but it would be weird to not include text indexes to drop suggestions for example (same for index creation suggestions!)

Copy link
Collaborator Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I read some of the code where we return the index suggestions in the performance advisor, and there's a check to skip complex index types, which include geospatial, text, and hashed indexes. I've confirmed this with Frank (intel PM) to make sure.

Comment on lines 55 to 71
type SchemaTriggerType =
| "PERCENT_QUERIES_USE_LOOKUP"
| "NUMBER_OF_QUERIES_USE_LOOKUP"
| "DOCS_CONTAIN_UNBOUNDED_ARRAY"
| "NUMBER_OF_NAMESPACES"
| "DOC_SIZE_TOO_LARGE"
| "NUM_INDEXES"
| "QUERIES_CONTAIN_CASE_INSENSITIVE_REGEX";

type SchemaRecommedationType =
| "REDUCE_LOOKUP_OPS"
| "AVOID_UNBOUNDED_ARRAY"
| "REDUCE_DOCUMENT_SIZE"
| "REMOVE_UNNECESSARY_INDEXES"
| "REDUCE_NUMBER_OF_NAMESPACES"
| "OPTIMIZE_CASE_INSENSITIVE_REGEX_QUERIES"
| "OPTIMIZE_TEXT_QUERIES";
Copy link
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Do we need to translate these to something the LLM would have an easier time interpreting?

Copy link
Collaborator Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Added maps where we can get readable descriptions so that the LLM understands this better.

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment
Labels
None yet
Projects
None yet
Development

Successfully merging this pull request may close these issues.

2 participants