This quickstart guide covers how to set up and run the Limina container.

If you’d like to use the Limina-hosted API, which includes the free demo, please create a portal account and use the code examples provided in the portal.

Setting up the Container

The Limina container can be deployed anywhere, but most commonly we see customer deployments on AWS and Azure. For all production deployments we recommend running on Kubernetes. For detailed installation instructions, please see our installation guides.

Logging into the Customer Portal

When you are onboarded, a customer portal account will be created for you. In the portal you can find helpful links, download your license file, and find instructions on how to log in to our container registry and pull the latest container image. If you haven’t received this information during your onboarding, please contact our customer support team via your dedicated Slack channel or at support@private-ai.com.

Getting the Container

The Limina container comes in 5 different versions: cpu, cpu-text, gpu, gpu-text and gpu-synthetic. Please see Grabbing the Image for details. In particular, if only the /process/text route is required, we recommend the text-only container versions: they are smaller and require fewer resources to run.
Once you’ve logged into the customer portal, you will find the commands to get the latest container. The container is distributed via a container registry on Azure, which you can log in to with the command found in the portal. It looks like this:
Docker Command
docker login -u INSERT_UNIQUE_CLIENT_ID -p INSERT_UNIQUE_CLIENT_PW crprivateaiprod.azurecr.io
Please reach out to our customer support if you are experiencing issues.

Getting a License

When you are logged into the customer portal, you should find the license file (or files) assigned to your account in the top right of the main page. There will be a download link beside the file along with information about the license tier and expiry date.

Starting up the Container - CPU

Starting with 3.0, license files are used for authentication. To start the container, please mount the license file as follows:
Docker Command
docker run --rm -v "full path to your license.json file":/app/license/license.json \
-p 8080:8080 -it crprivateaiprod.azurecr.io/deid:<tag>
For example:
Docker Command
docker run --rm -v "/home/johnsmith/paisandbox/my-license-file.json":/app/license/license.json \
-p 8080:8080 -it crprivateaiprod.azurecr.io/deid:3.0.0-cpu

Starting up the Container - GPU

To run the GPU container, please install the NVIDIA Container Toolkit first. The command is similar to the CPU one, except that the GPU device must be specified and shared memory must be set:
Docker Command
docker run --gpus '"device=<GPU_ID, usually 0>"' --shm-size=<4g, only required for non-text container> --rm -v "full path to license.json":/app/license/license.json \
-p 8080:8080 -it crprivateaiprod.azurecr.io/deid:<version>
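For example, a filled-in version of the command above, assuming a single GPU with ID 0, a 4g shared-memory size, and a hypothetical 3.0.0-gpu tag (following the same tag pattern as the CPU example):
Docker Command
```shell
# Assumes GPU 0, 4g shared memory (not needed for -text images),
# and a hypothetical 3.0.0-gpu tag.
docker run --gpus '"device=0"' --shm-size=4g --rm \
  -v "/home/johnsmith/paisandbox/my-license-file.json":/app/license/license.json \
  -p 8080:8080 -it crprivateaiprod.azurecr.io/deid:3.0.0-gpu
```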
Please see Running the Container for more details.

Sending Requests

You can then make a request to the container, for example by sending the following body to the /process/text route:
{
  "text": [
    "Hello John"
  ]
}
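As a sketch, the request above can be sent with Python's standard library. The /process/text route and localhost port 8080 follow the container examples above; the exact response shape is not covered here, so please check the code examples in the portal:

```python
import json
import urllib.request

def build_request(texts):
    """Build the JSON body for the /process/text route shown above."""
    return json.dumps({"text": texts}).encode("utf-8")

def send_request(texts, host="http://localhost:8080"):
    # Assumes the container from the docker run examples is listening on port 8080.
    req = urllib.request.Request(
        f"{host}/process/text",
        data=build_request(texts),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# The request body from the example above:
print(build_request(["Hello John"]).decode())  # {"text": ["Hello John"]}
```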

Processing Files

Processing files can be done by sending base64-encoded files to the /process/files/base64 route:
{
  "file": {
    "data": "<file_content_base64>",
    "content_type": "image/jpg"
  }
}
It is also possible to send URIs to the /process/files/uri route instead, which avoids the overhead of base64 encoding and keeps sensitive file contents out of the request body. Please see Processing Files for details.
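The base64 route above can be exercised with a short Python sketch that reads a file and wraps it in the body shown earlier (the file path is hypothetical; the route and body shape come from the example above):

```python
import base64
import json

def build_file_request(path, content_type):
    """Read a file and wrap it in the body expected by /process/files/base64."""
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return json.dumps({"file": {"data": encoded, "content_type": content_type}})

# Hypothetical usage: POST this body to http://localhost:8080/process/files/base64
# body = build_file_request("scan.jpg", "image/jpg")
```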