Type deduction for auto for templated expression declarations #257

@EienMiku

Description

Is your feature request related to a problem? Please describe.
When declaring a variable with auto (or decltype(auto), auto&&, etc.), if the initializing expression depends on template parameters, most language servers (e.g. clangd, IntelliSense) will not deduce the variable's type. This degrades LSP features (hints, completions, navigation) when working with generic libraries.

For example:

template <class T>
struct x {};

template <class T>
struct y {
    void f() {
        auto w = x<T>{};
    }
};

In this case, neither w nor auto gets its type deduced by existing language servers.

Another, more complicated case:

template <class T, int V>
struct x {
    int value{};
    bool b{};

    static auto f() -> x {
        return {};
    }

    auto g() {
        auto v = x<T, 1>::f();

        v;  // hovering over `v` here: from the primary template, its type is x<T, 1>
    }
};

This behavior is understandable because these deductions often happen at instantiation time; specializations and other template machinery can make an expression’s type differ from the primary template. However, if the language server does not attempt to deduce types, it cannot provide intelligent in-context features like hints, suggestions, or completions for those variables. That severely degrades the developer experience when working with generic libraries — it feels like editing code in a plain text editor.
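To make the risk concrete, here is an illustrative variation of the second example (the specialization below is hypothetical, not part of this report): a deduction taken from the primary template alone would report x<T, 1> for v, yet a visible specialization can change the answer entirely for particular arguments.

template <class T, int V>
struct x {
    static auto f() -> x { return {}; }    // primary template: f() returns x<T, V>
};

// Hypothetical user specialization: for T = int, f() returns double instead.
template <int V>
struct x<int, V> {
    static auto f() -> double { return 0.0; }
};

template <class T>
struct user {
    void g() {
        // Deduced from the primary template alone, v is x<T, 1>; but if this is
        // ever instantiated with T = int, the specialization above makes v a
        // double. A result computed without considering specializations
        // therefore needs an explicit confidence label.
        auto v = x<T, 1>::f();
    }
};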

Describe the solution you'd like

  1. clice should perform best-effort, non-specialized type inference and present results when feasible, prioritizing deductions from concepts/requires clauses (a sketch of what constraint-driven inference might look like follows this list).
  2. Always label inferred results with their source/confidence, e.g. inferred (non-specialized), inferred (from constraints), unknown — potential specializations exist, or specialized (visible).
  3. If inference conflicts with any visible user specialization or explicit instantiation, abandon the inference and return unknown — potential specializations exist. If the conflict is removed, re-run inference.
  4. Enforce resource limits (e.g. recursion/instantiation depth limit 1024) and time/budget caps; if limits are hit, return unknown — potential specializations exist.
  5. Rollout: implement conservative + constraint‑driven inference first; expose an opt‑in optimistic mode that gives best‑effort inferences with low‑confidence labels. Do not allow destructive refactors based solely on optimistic inferences.
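
As a rough sketch of what constraint-driven inference (items 1 and 2 above) could look like from the user's side, consider a constrained template. The concept name and its body below are illustrative, not taken from this request; the point is that a requires-clause can pin down part of the interface even before instantiation, so some deductions can be presented as inferred (from constraints) while weaker ones keep a lower-confidence label.

#include <concepts>
#include <cstddef>

template <class T>
concept Buffer = requires(T t) {
    { t.data() } -> std::same_as<char *>;
    { t.size() } -> std::convertible_to<std::size_t>;
};

template <Buffer B>
void consume(B &buf) {
    // The constraint guarantees data() returns exactly char*, so a language
    // server could report p as char* and label it "inferred (from constraints)".
    auto p = buf.data();

    // Only "convertible to std::size_t" is guaranteed here, so any concrete
    // type reported for n would be a best-effort guess and should carry a
    // lower-confidence label.
    auto n = buf.size();
}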
