
[RFC] Asset Management and Pipeline #11

@kabergstrom

I believe we can all agree that good tooling is essential for making users feel productive. Amethyst rests on a solid foundation of core tech, but to really make a data-driven engine shine, solid editing and introspection tools are essential. I'd like to take a step closer to the Amethyst tooling vision and address the issue of assets, a common factor in all game editing tools.

If this seems like a good direction, I'll work on an RFC that discusses how these tools may interact with assets once there is consensus on the problems to solve. This issue initially contains some of my thoughts on the problems and features I'd like to see in Amethyst, with a suggested technical design coming in the RFC. Looking forward to your thoughts!

Background & Problem Statements

Asset lifecycle

Production-ready game engines generally have multi-stage asset pipelines. This means that an asset goes through multiple steps of processing and conversion before being loadable in the engine runtime. Usually there are three stages for an asset.

Input -> Edit -> Runtime

The input format is usually some form of common data interchange format like FBX, PNG, or TGA. The edit format is engine-specific; it generally abstracts over the input format and makes it possible to attach metadata to the asset. The runtime format is optimized for quick loading and can be adapted per platform or based on other build settings. There are multiple benefits to this separation (a rough sketch of how the stages might be modeled follows the list below).

  • By separating the specifics of an input format from the data it provides, the engine becomes more extensible. PNG, TGA, and JPG provide textures, which are generally collections of two-dimensional color arrays. FBX, OBJ, and glTF provide 3D scene data. Support for more formats that provide similar data can then be added more easily.
  • How assets are loaded at runtime can be configured during edit time which simplifies loading APIs significantly. Decisions like which compression format to use for a texture or whether mipmaps should be generated can be made with tools instead of cluttering game code.
  • Asset preparation passes such as mesh simplification or texture compression can be configured and performed at build time instead of during runtime.
  • Assets can be built with different configurations for different purposes. Textures can be compressed differently for phones or consoles to ensure a smaller artifact and shaders can be precompiled for specific platforms to save on startup time.
  • Custom processing steps can be implemented by users. This can be useful to automatically configure something per platform or fix up some quirk in a third-party exporting tool.
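
As a rough illustration of this separation (and explicitly not a proposed API), the stages could be modeled as two conversions with edit-time settings in between. All names below are made up for this sketch:

```rust
/// Edit-time build configuration, chosen with tools rather than in
/// game code.
struct BuildSettings {
    target_platform: String, // e.g. "windows-x86_64", "android"
    compress_textures: bool,
    generate_mipmaps: bool,
}

/// Converts an input format (PNG, FBX, ...) into an engine-specific
/// edit-time representation that can carry extra metadata.
trait Importer {
    type EditAsset;
    fn import(&self, input_bytes: &[u8]) -> Result<Self::EditAsset, String>;
}

/// Converts the edit-time representation into a runtime artifact,
/// applying settings such as target platform or texture compression.
trait Builder {
    type EditAsset;
    type RuntimeArtifact;
    fn build(
        &self,
        asset: &Self::EditAsset,
        settings: &BuildSettings,
    ) -> Result<Self::RuntimeArtifact, String>;
}
```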

Build scalability through "pure functional asset pipelines"

It's nice when you don't have to wait for your computer, even if you have 80 GB of source data like some people. Frostbite may have spent a ton of time making their build pipelines fast, and Amethyst doesn't really need to do that yet, but the key take-away, and the enabling feature of their fully parallel and cacheable build pipeline, is a deterministic mapping from source data to build artifact. This is what enables a bunch of caching tricks and studio-wide networked caching systems that, combined with a few 40G switches, can make your build times quite acceptable.

To clearly state the requirement: it must be possible to deterministically hash a source asset and every variable that becomes an input to the build process, and the build artifact itself must be deterministic. This usually means hashing the asset's build config, target platform, compiler version, build code version, importer version, and the hashes of asset dependencies. Once you have calculated the hash, you can request the artifact from the network or a local cache.
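
A minimal sketch of what hashing every build input could look like. It uses the standard library's `DefaultHasher` for brevity, which is not stable across Rust versions; a real pipeline would want a stable, documented hash, and every field name below is an assumption:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Everything that can influence the build artifact must be hashed.
/// Field names are illustrative, not a proposed schema.
#[derive(Hash)]
struct BuildInputs<'a> {
    source_asset_hash: u64,       // hash of the source asset's bytes
    build_config: &'a str,        // serialized build configuration
    target_platform: &'a str,     // e.g. "ps4", "android"
    compiler_version: &'a str,
    build_code_version: &'a str,
    importer_version: u32,
    dependency_hashes: &'a [u64], // hashes of all asset dependencies
}

/// Deterministically maps build inputs to an artifact key that can
/// be looked up in a local or studio-wide networked cache.
fn artifact_key(inputs: &BuildInputs) -> u64 {
    let mut hasher = DefaultHasher::new();
    inputs.hash(&mut hasher);
    hasher.finish()
}
```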

NVMe M.2 drives are becoming cheaper and cheaper, with multiple GB/s of sequential read and write speed. I'd be really glad if Amethyst's asset pipeline could scale to the limits of the hardware.

Concurrent Modifications

While I enjoy the Unix philosophy and admire the vision for Amethyst tools, there is a large difference between Unix command-line tools and game development tools. Game development tools are usually interactive and persistent in their display of information, while Unix tools run once over a set of data, output a result, and terminate. This difference results in one of the greatest challenges of computer science: cache invalidation!

Let's take a particle system editor as an example. Perhaps it edits an entity (prefab) asset. These assets are files on disk and presumably are not parsed and loaded each frame; thus there is a cached in-memory representation of the disk contents. If another tool, say a general component inspector of some kind, were to edit the same file concurrently, there is a chance of inconsistent or lost data unless the tools exercise some form of cache coherence protocol.
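
A minimal way to at least detect such a conflict, rather than silently losing data, is optimistic concurrency: each cached asset carries a version counter, and a write is rejected if the asset changed since the tool read it. This sketch is one possible approach, not a proposed design:

```rust
/// Hypothetical optimistic-concurrency sketch: detects (but does not
/// merge) conflicting edits to a shared in-memory asset cache.
struct CachedAsset {
    version: u64,
    data: Vec<u8>,
}

enum WriteError {
    /// Another tool modified the asset since we read it.
    Conflict { current_version: u64 },
}

impl CachedAsset {
    /// `based_on` is the version the editing tool originally read.
    fn try_write(&mut self, based_on: u64, data: Vec<u8>) -> Result<(), WriteError> {
        if self.version != based_on {
            return Err(WriteError::Conflict { current_version: self.version });
        }
        self.data = data;
        self.version += 1;
        Ok(())
    }
}
```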

Hot reload

Quick iteration times are key to staying competitive in the current game development market, and hot reloading of as many asset types as possible is a large leap in the right direction. A running game should be able to pick up asset changes from tooling over the network, enabling hot reloading even on non-development machines.
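
For the networked case, the tooling side could push change notifications that the game's asset system applies behind existing handles. The message shape below is purely illustrative and assumes a flat byte-blob artifact store:

```rust
use std::collections::HashMap;

/// Hypothetical notification pushed from tooling to a running game
/// (e.g. over a TCP connection) so it can hot reload assets.
enum AssetChange {
    /// A new runtime artifact was built for this asset.
    Modified { asset_id: u128, new_artifact: Vec<u8> },
    /// The asset was deleted and should be unloaded.
    Removed { asset_id: u128 },
}

/// Swapping data behind the asset ID means systems holding a handle
/// pick up the new contents without being restarted.
fn apply(change: AssetChange, storage: &mut HashMap<u128, Vec<u8>>) {
    match change {
        AssetChange::Modified { asset_id, new_artifact } => {
            storage.insert(asset_id, new_artifact);
        }
        AssetChange::Removed { asset_id } => {
            storage.remove(&asset_id);
        }
    }
}
```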

Search and query

Presumably many tools will want to search for specific files or for attributes within files; this is useful, for example, when finding which assets reference a specific asset. Being able to find what you are looking for is amazing, and if search can be provided as a common service to all tooling, it would save a lot of time: tool developers avoid duplicated code and users find what they need. Attribute indexing would presumably require asset reflection of some sort.

Asset identifiers and Renaming or Moving

Users want to be able to rename or move assets without compromising their data, so references between assets cannot be based on paths; preferably, loading assets is not path-based either. Bitsquid's blog discusses this issue in detail.

The productivity gained from being able to describe your entire game as a graph, where each node is an asset and each edge is an asset reference, is incredible in many cases. It enables a better understanding of resource usage through visualization and makes it possible to automatically optimize runtime asset layouts based on dependencies.

Persistent asset IDs can also enable "serialization of handles", where a reference is represented on disk as an asset ID but in memory as a handle that is materialized as the asset is loaded.
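
A sketch of what such a handle might look like; the `AssetId` and `Handle` types here are hypothetical stand-ins, not Amethyst's existing types:

```rust
use std::marker::PhantomData;

/// Hypothetical persistent asset ID (UUID-like).
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct AssetId(u128);

/// In-memory handle to an asset of type T. On disk, only the AssetId
/// is stored; the slot index into loaded asset storage is filled in
/// as the asset is loaded.
struct Handle<T> {
    id: AssetId,
    slot: Option<usize>, // None until the asset is loaded
    _marker: PhantomData<T>,
}

impl<T> Handle<T> {
    /// Deserialization path: reconstruct a handle from the on-disk
    /// AssetId; the loader materializes `slot` later.
    fn from_id(id: AssetId) -> Self {
        Handle { id, slot: None, _marker: PhantomData }
    }
}
```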

Asset Versioning or Version Upgrade

I'd argue that the #1 reason Linux has seen such success is Linus Torvalds' dedication to maintaining compatibility between versions. When updating from one version of the Linux kernel to another, you never need to update any other applications, thanks to the strict policy of "no user space regressions".

It'd be nice if Amethyst had a way to ensure that assets created in older versions are still compatible when updating, or at least that there is an upgrade path. Otherwise Amethyst may end up with people staying on older versions, splitting the community at each major update. I'm not saying that this promise of not breaking people's projects needs to exist right now, but there should be a technical plan for how it can be handled in the future, to ensure both a smooth upgrade process for users and, preferably, a low maintenance cost for the Amethyst developers.

An important note is that it's easier to automatically upgrade people's data than their code. For a data-driven engine, that's probably something to embrace.
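
A minimal sketch of such a data upgrade path: store a format version with each asset and migrate old versions forward on load. The versions, fields, and stub parsers below are made up for illustration:

```rust
/// Made-up example: two format versions of the same asset metadata.
struct TextureMetaV1 { compress: bool }
struct TextureMetaV2 { compress: bool, generate_mipmaps: bool }

/// Each version knows how to become the next one, so any old asset
/// can be walked forward on load.
fn upgrade_v1_to_v2(old: TextureMetaV1) -> TextureMetaV2 {
    TextureMetaV2 {
        compress: old.compress,
        // New fields get sensible defaults for old assets.
        generate_mipmaps: true,
    }
}

/// Stub parsers standing in for real deserialization.
fn parse_v1(_bytes: &[u8]) -> TextureMetaV1 {
    TextureMetaV1 { compress: true }
}
fn parse_v2(_bytes: &[u8]) -> TextureMetaV2 {
    TextureMetaV2 { compress: true, generate_mipmaps: true }
}

/// Dispatch on the stored format version and migrate forward.
fn load(version: u32, bytes: &[u8]) -> Result<TextureMetaV2, String> {
    match version {
        1 => Ok(upgrade_v1_to_v2(parse_v1(bytes))),
        2 => Ok(parse_v2(bytes)),
        v => Err(format!("unknown asset format version {}", v)),
    }
}
```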
