Muna supports creating predictions on serverless containers backed by powerful GPUs.
All predictors compiled for Linux x86_64 on Muna support remote predictions by default.
Remote predictions are an experimental feature, and may change significantly or be removed on short notice.

Making a Remote Prediction

Use the muna.beta.predictions.remote.create method to create a prediction in the cloud:
import { Muna } from "muna"

// 💥 Create your Muna client
const muna = new Muna({ accessKey: "..." });

// 🔥 Run the prediction remotely
const prediction = await muna.beta.predictions.remote.create({
    tag: "@fxn/greeting",
    inputs: { name: "Yusuf" }
});

// 🚀 Print the result
console.log(prediction.results[0]);
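When consuming the result in application code, it can help to guard against an empty or failed prediction instead of indexing results directly. The sketch below assumes a minimal prediction shape with the results field shown above plus an error field; the real SDK types may differ.

```typescript
// Hypothetical shape covering only the fields used here; the real SDK types may differ.
interface RemotePrediction {
    results?: unknown[];
    error?: string;
}

// Return the first result, surfacing a failed or empty prediction as an exception.
function firstResult (prediction: RemotePrediction): unknown {
    if (prediction.error)
        throw new Error(prediction.error);
    if (!prediction.results || prediction.results.length === 0)
        throw new Error("Prediction returned no results");
    return prediction.results[0];
}
```

You could then replace the console.log above with console.log(firstResult(prediction)) to fail loudly instead of printing undefined.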

Leveraging GPU Acceleration

A key advantage of remote predictions is access to orders of magnitude more compute than your local device provides. Muna supports specifying a RemoteAcceleration when creating remote predictions:
// Create a remote prediction on an Nvidia A100 GPU
const prediction = await muna.beta.predictions.remote.create({
    tag: "@meta/llama-3.1-70b",
    inputs: { ... },
    acceleration: "remote_a100"
});
Below are the currently supported types of RemoteAcceleration:
| Remote Acceleration | Notes |
| --- | --- |
| RemoteAcceleration.Auto | Automatically use the ideal remote acceleration. |
| RemoteAcceleration.CPU | Predictions are run on AMD CPU servers. |
| RemoteAcceleration.A40 | Predictions are run on an Nvidia A40 GPU. |
| RemoteAcceleration.A100 | Predictions are run on an Nvidia A100 GPU. |
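The example above passes "remote_a100" for the A100 accelerator. As an illustration of how an application might choose among these options, here is a hypothetical heuristic keyed on model size; the string values other than "remote_a100" are assumptions based on that naming pattern, not confirmed SDK values.

```typescript
// Assumed string values mirroring "remote_a100" from the example above.
type RemoteAcceleration = "remote_auto" | "remote_cpu" | "remote_a40" | "remote_a100";

// Illustrative heuristic: pick an accelerator from a model's parameter count (in billions).
function pickAcceleration (billionParams: number): RemoteAcceleration {
    if (billionParams >= 30) return "remote_a100"; // large models need the most VRAM
    if (billionParams >= 3)  return "remote_a40";  // mid-sized models fit on an A40
    return "remote_cpu";                           // small models run fine on CPU
}
```

Passing RemoteAcceleration.Auto (or simply omitting the field, if the SDK defaults to it) sidesteps this choice entirely by letting Muna pick.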
Remote predictions are priced per second of prediction time (i.e. prediction.latency), with rates that depend on the remote acceleration. See our pricing for more information.
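Since billing is per second of prediction time, you can estimate the cost of a prediction from its latency. The rates below are placeholders, not real prices (see the pricing page), and the sketch assumes prediction.latency is reported in milliseconds.

```typescript
// Hypothetical per-second rates in USD — placeholders, not real prices.
const RATES: Record<string, number> = {
    remote_cpu: 0.0001,
    remote_a40: 0.001,
    remote_a100: 0.003
};

// Estimate the cost of a prediction, assuming `latencyMs` is in milliseconds.
function estimateCost (latencyMs: number, acceleration: string): number {
    const ratePerSecond = RATES[acceleration] ?? 0;
    return (latencyMs / 1000) * ratePerSecond;
}
```

For example, a two-second prediction on an A100 would cost 2 × the A100 per-second rate under these assumptions.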
If you want to self-host the remote acceleration servers in your VPC or on-prem, reach out to us.