  • Redesigned search and filtering for runtime logs

    The Runtime Logs search bar in your project dashboard has been redesigned to make filtering and exploring your logs faster and more intuitive.

    • Structured filters. When you type a filter like level:error or status:500, the search bar parses it into a visual pill you can read at a glance and remove with a click. Complex queries with multiple filters become easy to scan and edit without retyping anything.

    • Smarter suggestions. As you type, the search bar suggests filter values based on your actual log data. Recent queries are saved per project and appear at the top, so you can rerun common searches without retyping them.

    • Better input handling. The search bar validates your filters as you type and flags errors with a tooltip so you can fix typos before running a search. Pasting a Vercel Request ID automatically converts it into a filter.

    These improvements are available now in your project dashboard. Learn more about runtime logs.

  • Automatic build fix suggestions with Vercel Agent

    You can now get automatic code-fix suggestions for broken builds from the Vercel Agent, directly in GitHub pull request reviews or in the Vercel Dashboard.

    When the Vercel Agent reviews your pull request, it now scans your deployments for build errors. When it detects a failure, it automatically suggests a code fix based on your code and build logs.

    Vercel Agent - Automatic code suggestion on GitHub pull request

    In addition, whenever a build error is detected, Vercel Agent can automatically suggest a code fix inside the Vercel dashboard and propose the change as a GitHub pull request for you to review before merging.

    Vercel Agent - Build fix suggestions on the Vercel Dashboard

    Get started with Vercel Agent code review in the Agent dashboard, or learn more in the documentation.

  • Automated security audits now available for skills.sh

    Skills on skills.sh now have automated security audits to help developers use skills with confidence.

    Working with our partners Gen, Socket, and Snyk, we produce independent security reports that let us rapidly scale and audit over 60,000 skills and counting.

    Skills.sh provides greater ecosystem support with:

    • Transparent results: Security audits appear publicly on each skill's detail page.

    • Leaderboard protection: Skills flagged as malicious are automatically hidden from the leaderboard and search results. If you navigate directly to a flagged skill, a warning note appears before installation.

    • Security validation: As of the latest release of the skills CLI, adding skills clearly displays audit results and risk levels before installation.

    Learn more at skills.sh.

  • Recraft V4 on AI Gateway

    Recraft V4 is now available on AI Gateway.

    A text-to-image model built for professional design and marketing use cases, V4 was developed with input from working designers. It improves photorealism, with more realistic skin, natural textures, and fewer synthetic artifacts, and it produces images with clean lighting and varied composition. For illustration, the model can generate original characters with less predictable color palettes.

    There are two versions:

    • V4: Faster and more cost-efficient, suited for everyday work and iteration

    • V4 Pro: Generates higher-resolution images for print-ready assets and large-scale use

    To use this model, set model to recraft/recraft-v4-pro or recraft/recraft-v4 in the AI SDK:

    import { generateImage } from 'ai';

    const result = await generateImage({
      model: 'recraft/recraft-v4',
      prompt:
        `Product photo of a ceramic coffee mug on a wooden table,
        morning light, shallow depth of field.`,
    });
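
    To save the output, the sketch below switches to the Pro model for higher-resolution, print-ready assets and writes the image to disk. It assumes the generated image exposes a uint8Array property, as recent AI SDK image results do; the file name is arbitrary.

    import { generateImage } from 'ai';
    import { writeFile } from 'node:fs/promises';

    // Same call as above, but targeting the higher-resolution Pro model.
    const { image } = await generateImage({
      model: 'recraft/recraft-v4-pro',
      prompt:
        `Product photo of a ceramic coffee mug on a wooden table,
        morning light, shallow depth of field.`,
    });

    // uint8Array is assumed to be available on the generated image result.
    await writeFile('mug.png', image.uint8Array);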

    AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

    Learn more about AI Gateway, view the AI Gateway model leaderboard, or try it in our model playground.

  • Vercel Sandbox snapshots now allow custom retention periods

    Snapshots created with Vercel Sandbox now have configurable expiration, instead of the previous 7-day limit, along with a higher default.

    import { Sandbox } from '@vercel/sandbox';
    import ms from 'ms';

    const sandbox = await Sandbox.create();
    await sandbox.snapshot({ expiration: ms('1d') });

    The expiration can be configured anywhere from 1 day up to never expiring. If not provided, the default snapshot expiration is 30 days.
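
    For example, you could prepare a sandbox once and keep the snapshot around for 90 days so later runs can reuse it. The sketch below is illustrative: the runCommand setup step and the 90-day retention are assumptions, not part of this announcement.

    import { Sandbox } from '@vercel/sandbox';
    import ms from 'ms';

    // Boot a sandbox and do setup work that is worth preserving.
    const sandbox = await Sandbox.create();
    await sandbox.runCommand({ cmd: 'npm', args: ['install', 'typescript'] });

    // Keep the prepared environment for 90 days, well past the old 7-day cap.
    await sandbox.snapshot({ expiration: ms('90d') });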

    You can also configure this in the CLI.

    # Create a snapshot of a running sandbox
    sandbox snapshot sb_1234567890 --stop
    # Create a snapshot that expires in 14 days
    sandbox snapshot sb_1234567890 --stop --expiration 14d
    # Create a snapshot that never expires
    sandbox snapshot sb_1234567890 --stop --expiration 0

    Read the documentation to learn more about snapshots.

  • Claude Sonnet 4.6 is live on AI Gateway

    Claude Sonnet 4.6 from Anthropic is now available on AI Gateway with a 1M token context window.

    Sonnet 4.6 approaches Opus-level intelligence with strong improvements in agentic coding, code review, frontend UI quality, and computer use accuracy. The model proactively executes tasks, delegates to subagents, and parallelizes tool calls, with MCP support for scaled tool use. As a hybrid reasoning model, Sonnet 4.6 delivers both near-instant responses and extended thinking within the same model.

    To use this model, set model to anthropic/claude-sonnet-4.6 in the AI SDK. The model supports the effort setting and adaptive thinking:

    import { streamText } from 'ai';

    const result = streamText({
      model: 'anthropic/claude-sonnet-4.6',
      prompt:
        `Build a dashboard component from this spec with
        responsive layout, dark mode support, and accessibility.`,
      providerOptions: {
        anthropic: {
          effort: 'medium',
          thinking: { type: 'adaptive' },
        },
      },
    });
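
    Since streamText returns a streaming result, you can consume tokens as they arrive. Continuing the example above, a minimal sketch that prints the response as it streams:

    // Print the response to stdout as tokens arrive.
    for await (const textPart of result.textStream) {
      process.stdout.write(textPart);
    }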

    AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

    Learn more about AI Gateway, view the AI Gateway model leaderboard, or try it in our model playground.

  • Improved streaming runtime logs exports

    With runtime logs, you can view and export your logs. Exports now stream directly to the browser, so your download starts immediately and you can continue to use the Vercel dashboard while the export runs in the background. This eliminates the wait for large files to buffer.

    Additionally, we've added two new options: you can now export exactly what's on your screen, or all requests matching your current search.

    All plans can export up to 10,000 requests per export, and Observability Plus subscribers can export up to 100,000 requests.

    Exported log data is now indexed by request, and export limits are applied per request, so exports stay consistent with the filtered requests shown in the Runtime Logs dashboard.

    Learn more about runtime logs.

  • Qwen 3.5 Plus is on AI Gateway

    Qwen 3.5 Plus is now available on AI Gateway.

    The model comes with a 1M context window and built-in adaptive tool use. Qwen 3.5 Plus excels at agentic workflows, thinking, searching, and using tools across multimodal contexts, making it well-suited for web development, frontend tasks, and turning instructions into working code. Compared to Qwen 3 VL, it delivers stronger performance in scientific problem solving and visual reasoning tasks.

    To use this model, set model to alibaba/qwen3.5-plus in the AI SDK:

    import { streamText } from 'ai';

    const result = streamText({
      model: 'alibaba/qwen3.5-plus',
      prompt:
        `Analyze this UI mockup, extract the design system,
        and generate a production-ready React component
        with responsive breakpoints and theme support.`,
    });
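
    Because the model is tuned for agentic tool use, you can also hand it tools through the AI SDK. The sketch below assumes the AI SDK 5-style tool() helper (inputSchema plus execute) and a hypothetical getDesignTokens tool; swap in your own lookup logic.

    import { streamText, tool, stepCountIs } from 'ai';
    import { z } from 'zod';

    const result = streamText({
      model: 'alibaba/qwen3.5-plus',
      prompt: 'Look up our design tokens and generate a themed Button component.',
      tools: {
        // Hypothetical tool for illustration only.
        getDesignTokens: tool({
          description: 'Fetch the design tokens for the current project',
          inputSchema: z.object({ theme: z.enum(['light', 'dark']) }),
          execute: async ({ theme }) => ({ theme, primary: '#0070f3', radius: 8 }),
        }),
      },
      // Allow a few tool-call round trips before the final answer.
      stopWhen: stepCountIs(5),
    });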

    AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

    Learn more about AI Gateway, view the AI Gateway model leaderboard, or try it in our model playground.