Enlighten SDK 3.10 Documentation
The precompute process


    Nov 21, 2019


The precompute tools are available in prebuilt binary form for Windows, Linux and macOS.

    The precompute operates on the Enlighten scene exported by your editing tools.

    Use the High Level Build System to run the precompute. The HLBS is a powerful parallel build system, which provides all of the functionality used by a typical implementation of Enlighten.

The precompute creates the Build_<scene> directory at the scene root. This directory contains all intermediate and output files generated by the build, including the Enlighten runtime data.

    To take advantage of incremental builds, when you export the Enlighten scene, for each file you write:

    1. Write the content of the file to a temporary buffer.
    2. If the file already exists on disk, load the existing file.
    3. If the existing file is identical to the temporary buffer, don't write the file to disk.
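The write-if-changed steps above can be sketched as a small helper. `WriteFileIfChanged` is a hypothetical name, not part of the Enlighten API; it returns true only when the file on disk was actually (re)written.

```cpp
#include <filesystem>
#include <fstream>
#include <sstream>
#include <string>

// Sketch of the write-if-changed export step described above.
// Returns true if the file on disk was (re)written.
bool WriteFileIfChanged(const std::filesystem::path& path, const std::string& newContent)
{
    // 1. The new content is already held in a buffer (newContent).
    // 2. If the file exists on disk, load it and compare.
    if (std::filesystem::exists(path))
    {
        std::ifstream in(path, std::ios::binary);
        std::ostringstream existing;
        existing << in.rdbuf();
        // 3. Identical content: skip the write, so the file timestamp is
        // unchanged and the incremental build can reuse previous results.
        if (existing.str() == newContent)
            return false;
    }
    std::ofstream out(path, std::ios::binary);
    out << newContent;
    return true;
}
```

Skipping the write keeps the file's modification time stable, which is what allows the build system to detect that nothing changed.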

    Hardware

The precompute performs a lot of CPU-intensive computation and runs best on a CPU with as many fast cores as possible. With a very large scene, some precompute tasks may require multiple GB of physical memory.

    The time taken by the precompute is reduced in approximate inverse proportion to the number of logical cores available. To maximize CPU use, make the additional logical cores provided by Hyper-threading available. For example, an 8-core CPU with Hyper-threading provides up to 16 logical cores for the distributed precompute.
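A quick way to check how many logical cores the precompute can use on a given machine is the standard library's concurrency query. `AvailableLogicalCores` is an illustrative helper, not part of the Enlighten API:

```cpp
#include <algorithm>
#include <thread>

// Illustrative helper: query how many logical cores the distributed
// precompute can use on this machine. With Hyper-threading enabled this
// is typically twice the physical core count, e.g. 16 logical cores on
// an 8-core CPU.
unsigned AvailableLogicalCores()
{
    // hardware_concurrency() may return 0 when the count is unknown;
    // fall back to a single core in that case.
    return std::max(1u, std::thread::hardware_concurrency());
}
```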

To speed up the precompute, we recommend distributing the HLBS build across many machines.

    License

To run the Enlighten precompute, you need a valid Enlighten license file. Each license file is valid for a limited period.

The High-level Build System worker process GeoPrecomp2 checks for a file called License.txt, first in the same directory and then in each parent directory in turn.

The Low-level precompute API requires that you call IPrecompute::SetLicense(), passing it the Base64-encoded license data. In your license file, this is the block of data below the line starting License:.
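Extracting that block from the license file text might look like the sketch below. `ExtractLicenseData` is a hypothetical helper, and the exact file layout (wrapped Base64 lines following the License: line) is an assumption; check your own license file.

```cpp
#include <sstream>
#include <string>

// Illustrative helper: extract the Base64-encoded block that follows the
// line starting with "License:" in the license file, ready to pass to
// IPrecompute::SetLicense(). The parsing details are an assumption about
// the file layout.
std::string ExtractLicenseData(const std::string& licenseFileText)
{
    std::istringstream lines(licenseFileText);
    std::string line;
    std::string base64;
    bool inDataBlock = false;
    while (std::getline(lines, line))
    {
        if (inDataBlock && !line.empty())
            base64 += line;          // concatenate the wrapped Base64 lines
        else if (line.rfind("License:", 0) == 0)
            inDataBlock = true;      // the data starts on the following lines
    }
    return base64;
}

// Hypothetical usage with the Low-level precompute API:
//     precompute->SetLicense(ExtractLicenseData(fileText).c_str());
```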

    For licensing support, please contact Enlighten Support.

    Advanced usage

If you need complete flexibility, you can also run individual precompute tasks directly using the Low-level precompute API.
