
Renku

Renku is an incremental build system for AI-generated media assets.

Today, building anything longer than a few seconds with AI usually means juggling multiple tools: one or more model APIs for each type of asset, a separate editor for the timeline or layout, and a lot of copy–paste.

If you want a one-minute piece made of six 10-second segments, you typically have to:

  • craft or tweak prompts for each segment,
  • generate images or video for each segment,
  • generate narration audio for each segment,
  • pick or generate music and effects,
  • import everything into a video editor and line it up by hand.

You can handcraft a script to automate this, but when you don't like the outcome and want to tweak one step, re-running the script regenerates all of the artifacts.

Renku replaces that manual, ad-hoc process (or hand-rolled automation script) with an incremental build system.

You define a blueprint (a “workflow pipeline”) that describes the flow as a graph of producers (prompt generators, asset generators, composition/export, and so on). You then kick off generation against that pipeline with a single inquiry prompt. Renku runs the pipeline, caches the artifacts each producer emits, and re-runs only the parts of the graph that are affected when you change a prompt, a parameter, or upstream data.
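
To make the shape of such a pipeline concrete, here is a deliberately simplified Python sketch of a graph of producers. The names and structure are hypothetical illustrations of the idea, not Renku's actual blueprint format; see Blueprint Development below for the real syntax.

```python
# Illustrative only: these classes are NOT Renku's blueprint format,
# just a sketch of a pipeline expressed as a dependency graph.
from dataclasses import dataclass, field

@dataclass
class Producer:
    name: str                                         # e.g. "segment_prompts"
    inputs: list[str] = field(default_factory=list)   # upstream producers it depends on
    params: dict = field(default_factory=dict)        # prompts, model settings, etc.

# A one-minute piece built from six segments, sketched as a graph of producers.
blueprint = [
    Producer("segment_prompts", params={"segments": 6, "inquiry": "..."}),
    Producer("segment_video",   inputs=["segment_prompts"]),
    Producer("narration_audio", inputs=["segment_prompts"]),
    Producer("music"),
    Producer("compose_export",  inputs=["segment_video", "narration_audio", "music"]),
]
```

The important property is that every producer declares what it consumes, so the whole pipeline forms a dependency graph that a build system can reason about.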

What Renku is good for

Renku is meant for any workload where you need many generated assets that depend on each other, for example:

  • sequences of clips for long- or short-form video,
  • narrated image sequences,
  • multi-part content for lessons or courses,
  • any other pipeline where prompts, assets, and composed outputs are linked.

The system is extensible: today you might target videos, but you can add stages and plugins to target slide decks, print-style layouts, or other composed formats, using the same incremental build model.

Agents and tools

Renku can also be used as a tool from agentic systems (for example, as a skill in Claude Code).

Instead of having an agent manually orchestrate dozens of individual generations and keep track of all the intermediate assets, you prompt it to use the Renku skill:

“Run the video story blueprint and tell me about the Battle of Waterloo in 30 seconds”

The blueprint encodes the multi-step, multi-asset logic; Renku executes it incrementally.

Table Of Contents

  • Installation
  • Quick Start
  • High-level concepts
  • Usage Scenarios
  • CLI Reference
  • Blueprint Development
  • Architecture & Concepts
  • How To Guide
  • Building Custom Producers

Installation

Quick Start

High-level concepts


  • Blueprint: a declarative description of stages, their inputs/outputs, and dependencies.

  • Layer: a group of tasks that can run in parallel.

  • Artifacts: files or structured data produced by stages (scripts, audio files, image sequences, JSON, etc.).

  • Incremental builds: Renku tracks which artifacts come from which stages and inputs, and when you change something it rebuilds only the affected stages (sketched below).
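
The incremental model is the same idea found in conventional build systems, applied to generated media: each stage is keyed by its own parameters plus the content of its upstream artifacts, and it is re-run only when that key changes. A minimal sketch of the idea, with hypothetical helper names rather than Renku's internals:

```python
# Minimal sketch of incremental rebuilds; not Renku's actual implementation.
import hashlib
import json

cache: dict[str, bytes] = {}   # cache key -> previously built artifact


def cache_key(stage_name: str, params: dict, upstream_artifacts: list[bytes]) -> str:
    """Key a stage by its own parameters plus the content of its upstream artifacts."""
    h = hashlib.sha256()
    h.update(stage_name.encode())
    h.update(json.dumps(params, sort_keys=True).encode())
    for artifact in upstream_artifacts:
        h.update(hashlib.sha256(artifact).digest())
    return h.hexdigest()


def build_stage(stage_name, params, upstream_artifacts, generate):
    """Call `generate` (e.g. a model API) only if this input combination is not cached."""
    key = cache_key(stage_name, params, upstream_artifacts)
    if key not in cache:
        cache[key] = generate(params, upstream_artifacts)
    return cache[key]
```

Changing the prompt for one segment changes that segment's key and the keys of everything downstream of it; every other stage is served from cache.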

Usage Scenarios

CLI Reference

See cli_interface for the full command-line reference.

Blueprint Development

Architecture & Concepts

How To Guide

A guide to authoring blueprints.

Building Custom Producers

Custom producers are not yet open for external development, but you can look at the providers package to see how the existing producers work.
