With Muna, application developers and AI agents can run AI models in two lines of code. Specifically:
  • AI anywhere: Muna converts Python inference code into executable binaries that can be run locally and in the cloud.
  • Self-contained inference: Muna creates executable binaries with zero dependencies, which application developers and AI agents can use in two lines of code.
  • OpenAI-compatible: Muna ships an OpenAI client that allows developers to evaluate and deploy millions of open-source AI models with minimal changes.
  • Hardware acceleration: Muna is designed to use all available compute acceleration hardware (e.g. GPU, NPU) and instruction sets.

Installing Muna

We provide SDKs for common development frameworks:
# Run this in Terminal
$ npm install muna
Most of our client SDKs are open-source. Star them on GitHub!

Making your First Prediction

First, head over to Muna to create an account and generate an access key. Now, let’s run the @fxn/greeting predictor, which accepts a name and returns a friendly greeting:
import { Muna } from "muna";

// πŸ’₯ Create a Muna client
const muna = new Muna({ accessKey: "..." });

// πŸ”₯ Make a prediction
const prediction = await muna.predictions.create({
  tag: "@fxn/greeting",
  inputs: { name: "Yusuf" }
});

// πŸš€ Print the result
console.log(prediction.results[0]);
You should see a friendly greeting. Congrats on your first prediction πŸŽ‰
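If you plan to make predictions like this repeatedly, you can wrap the call in a small helper. The sketch below uses only the `predictions.create` call and `results` array shown above; the `PredictionClient` type and the `greetingRequest` and `greet` helpers are hypothetical names for illustration, not part of the Muna SDK:

```typescript
// Minimal structural type for the client, inferred from the example above.
// The real SDK exports richer types; this sketch models only what we use.
type PredictionClient = {
  predictions: {
    create(request: {
      tag: string;
      inputs: Record<string, unknown>;
    }): Promise<{ results: unknown[] }>;
  };
};

// Build the request payload for the @fxn/greeting predictor.
function greetingRequest(name: string) {
  return { tag: "@fxn/greeting", inputs: { name } };
}

// Run the predictor and return its first result.
async function greet(client: PredictionClient, name: string): Promise<unknown> {
  const prediction = await client.predictions.create(greetingRequest(name));
  return prediction.results[0];
}
```

Because the helper depends only on the structural type, you can pass your `Muna` client directly, and it is also easy to unit-test with a stub client.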