If you’d like to use the Limina-hosted API, which includes the free demo, please
create a portal account and use the code examples
provided in the portal.
Setting up the Container
The Limina container can be deployed anywhere, but most commonly we see customer deployments on AWS and Azure. For all production deployments we recommend running on Kubernetes. For detailed installation instructions, please see our installation guides.

Logging into the Customer Portal
When you are onboarded, a customer portal account will be created for you. In the portal you can find helpful links, download your license file, and find instructions on how to log in to our container registry and pull the latest container image. If you haven't received this information during your onboarding, please contact our customer support team via your dedicated Slack channel, or at support@private-ai.com.

Getting the Container
The Limina container comes in five different versions: cpu, cpu-text, gpu, gpu-text and gpu-synthetic. Please see Grabbing the Image for details. In particular, if only the process/text route is required, we recommend the text-only container versions: they are smaller and require fewer resources to run.
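As a rough sketch, logging in to the registry and pulling an image looks like the following. The registry host, image path, and tag here are placeholders, not the real values; use the registry address and credentials shown in your customer portal.

```shell
# Log in with the credentials from the customer portal (placeholder host).
docker login <registry>

# Pull a text-only CPU image (placeholder image path and tag).
docker pull <registry>/limina/cpu-text:latest
```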
Getting a License
When you are logged into the customer portal, you should find the license file (or files) assigned to your account in the top right of the main page. There will be a download link beside each file, along with information about the license tier and expiry date.

Starting up the Container - CPU
Starting with 3.0, license files are used for authentication. To start the container, please mount the license file as follows:
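A minimal sketch of the CPU startup command is shown below. The image name, license file name, mount path inside the container, and published port are all assumptions; the exact values are given in the portal's instructions.

```shell
# Start the CPU container with the license file mounted (paths are assumptions).
docker run --rm \
  -p 8080:8080 \
  -v "$(pwd)/license.json:/app/license/license.json" \
  <registry>/limina/cpu:latest
```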
Starting up the Container - GPU
To run the GPU container, please install the NVIDIA Container Toolkit first. The command to run the container is similar to the CPU one, except that the GPU device must be specified and shared memory must be set:
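Under the same assumptions as the CPU sketch (image name, paths, port), and with an assumed shared-memory size, a GPU startup command might look like this. `--gpus` exposes the GPU devices via the NVIDIA Container Toolkit, and `--shm-size` raises the container's shared memory.

```shell
# Start the GPU container; device and shared memory must be specified.
# The shm size and all names/paths below are assumptions.
docker run --rm \
  --gpus all \
  --shm-size=2g \
  -p 8080:8080 \
  -v "$(pwd)/license.json:/app/license/license.json" \
  <registry>/limina/gpu:latest
```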
Sending Requests
You can then make a request to the container like this:
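A hypothetical request sketch follows. The process/text route name comes from the section above, but the port, payload field names, and sample text are assumptions; please use the code examples provided in the portal for the exact request schema.

```shell
# Send a text-processing request to a locally running container.
# Payload shape is an assumption; see the portal's code examples.
curl -s http://localhost:8080/process/text \
  -H "Content-Type: application/json" \
  -d '{"text": ["My name is John and my phone number is 555-0100."]}'
```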
Processing Files
Processing files can be done by sending base64-encoded files to the /process/files/base64 route. Alternatively, files that the container can fetch itself can be sent to the /process/files/uri route, avoiding the overhead of base64 encoding and avoiding sending any sensitive information in the request. Please see Processing Files for details.
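As a sketch of the base64 route, the following encodes a local file and posts it to /process/files/base64. The payload field names, file name, and port are assumptions; check Processing Files for the exact schema.

```shell
# Base64-encode a local PDF (-w 0 disables line wrapping; GNU coreutils flag,
# on macOS plain `base64 < file` already emits one line).
B64=$(base64 -w 0 ./sample.pdf)

# Post it to the base64 file-processing route (field names are assumptions).
curl -s http://localhost:8080/process/files/base64 \
  -H "Content-Type: application/json" \
  -d "{\"file\": {\"data\": \"${B64}\", \"content_type\": \"application/pdf\"}}"
```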