
# AI Namespace

The `ai` namespace provides AI/LLM service providers.

```ts
import { ai } from "nevr-env/plugins";
```

## Providers

| Provider | Description |
| --- | --- |
| `ai.openai()` | OpenAI - GPT, DALL-E, Whisper, Embeddings |

## OpenAI

Full-featured OpenAI integration with organization support, Azure OpenAI, and model configuration.

### Basic Usage

```ts
import { createEnv } from "nevr-env";
import { ai } from "nevr-env/plugins";

const env = createEnv({
  plugins: [ai.openai()],
  runtimeEnv: process.env,
});

// env.OPENAI_API_KEY
```

Options

OptionTypeDefaultDescription
organizationbooleanfalseInclude organization ID
projectbooleanfalseInclude project ID
modelbooleanfalseInclude model configuration
defaultModelstring"gpt-4o"Default model name
azurebooleanfalseInclude Azure OpenAI config
baseUrlbooleanfalseInclude custom base URL
embeddingbooleanfalseInclude embedding model config
parametersbooleanfalseInclude default parameters
variableNamesobject-Custom variable names
extendobject-Extend schema with custom fields

### Environment Variables by Option

#### Default (no options)

| Variable | Required | Format | Description |
| --- | --- | --- | --- |
| `OPENAI_API_KEY` | ✓ | `sk-*` | OpenAI API key |

#### `organization: true`

| Variable | Required | Format | Description |
| --- | --- | --- | --- |
| `OPENAI_ORG_ID` | ✓ | `org-*` | Organization ID |

#### `project: true`

| Variable | Required | Description |
| --- | --- | --- |
| `OPENAI_PROJECT_ID` | ✓ | Project ID for newer OpenAI projects |

#### `model: true`

| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| `OPENAI_MODEL` | - | `gpt-4o` | Default model to use |

Supported models:

- `gpt-4o`, `gpt-4o-mini`
- `gpt-4-turbo`, `gpt-4`
- `gpt-3.5-turbo`
- `o1`, `o1-mini`, `o1-preview`
#### `azure: true`

| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| `AZURE_OPENAI_ENDPOINT` | ✓ | - | Azure OpenAI endpoint URL |
| `AZURE_OPENAI_API_VERSION` | - | `2024-02-01` | API version |
| `AZURE_OPENAI_DEPLOYMENT` | ✓ | - | Deployment name |

#### `baseUrl: true`

| Variable | Required | Description |
| --- | --- | --- |
| `OPENAI_BASE_URL` | ✓ | Custom base URL (proxies, alternative endpoints) |
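This value is typically passed straight through to the SDK's `baseURL` option. A minimal sketch of that wiring (the `buildClientOptions` helper is illustrative, not part of nevr-env; it assumes the official `openai` npm package's constructor options):

```typescript
// Illustrative: build OpenAI client options, only setting baseURL when configured.
interface ClientOptions {
  apiKey: string;
  baseURL?: string;
}

export function buildClientOptions(env: {
  OPENAI_API_KEY: string;
  OPENAI_BASE_URL?: string;
}): ClientOptions {
  return {
    apiKey: env.OPENAI_API_KEY,
    // Omit baseURL entirely when unset so the SDK uses its default endpoint.
    ...(env.OPENAI_BASE_URL ? { baseURL: env.OPENAI_BASE_URL } : {}),
  };
}

// Usage: new OpenAI(buildClientOptions(env));
```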

#### `embedding: true`

| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| `OPENAI_EMBEDDING_MODEL` | - | `text-embedding-3-small` | Embedding model |

#### `parameters: true`

| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| `OPENAI_MAX_TOKENS` | - | `4096` | Maximum tokens per request |
| `OPENAI_TEMPERATURE` | - | `0.7` | Sampling temperature (0-2) |
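One way to fold these defaults into a chat request is a small helper like the sketch below. The helper and its input shape are illustrative, not the plugin's API; it assumes the validated values are already numbers:

```typescript
// Illustrative: merge validated parameter defaults into a chat request payload.
export function withDefaults(
  env: { OPENAI_MAX_TOKENS?: number; OPENAI_TEMPERATURE?: number },
  prompt: string,
) {
  return {
    // Fall back to the documented defaults when a variable is unset.
    max_tokens: env.OPENAI_MAX_TOKENS ?? 4096,
    temperature: env.OPENAI_TEMPERATURE ?? 0.7,
    messages: [{ role: "user" as const, content: prompt }],
  };
}

// Usage: openai.chat.completions.create({ model: "gpt-4o", ...withDefaults(env, prompt) });
```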

### Examples

```ts
import { z } from "zod";

// With organization and model selection
ai.openai({
  organization: true,
  model: true,
  defaultModel: "gpt-4o",
})

// Azure OpenAI
ai.openai({
  azure: true,
})

// RAG application with embeddings
ai.openai({
  model: true,
  embedding: true,
  parameters: true,
})

// With custom OpenAI-compatible endpoint
ai.openai({
  baseUrl: true,
})

// Extend with assistant configuration
ai.openai({
  extend: {
    OPENAI_ASSISTANT_ID: z.string().startsWith("asst_"),
    OPENAI_VECTOR_STORE_ID: z.string().startsWith("vs_").optional(),
    OPENAI_THREAD_ID: z.string().startsWith("thread_").optional(),
  }
})
```

### Integration Example

```ts
// env.ts
import { createEnv } from "nevr-env";
import { ai } from "nevr-env/plugins";

export const env = createEnv({
  plugins: [
    ai.openai({
      organization: true,
      model: true,
    }),
  ],
  runtimeEnv: process.env,
});

// openai.ts
import OpenAI from "openai";
import { env } from "./env";

export const openai = new OpenAI({
  apiKey: env.OPENAI_API_KEY,
  organization: env.OPENAI_ORG_ID,
});

// chat.ts
import { env } from "./env";
import { openai } from "./openai";

export async function chat(prompt: string) {
  const response = await openai.chat.completions.create({
    model: env.OPENAI_MODEL ?? "gpt-4o",
    messages: [{ role: "user", content: prompt }],
  });
  return response.choices[0].message.content;
}
```

### Azure OpenAI Example

```ts
// env.ts
import { createEnv } from "nevr-env";
import { ai } from "nevr-env/plugins";

export const env = createEnv({
  plugins: [
    ai.openai({ azure: true }),
  ],
  runtimeEnv: process.env,
});

// azure-openai.ts
import { AzureOpenAI } from "openai";
import { env } from "./env";

export const openai = new AzureOpenAI({
  endpoint: env.AZURE_OPENAI_ENDPOINT,
  apiVersion: env.AZURE_OPENAI_API_VERSION,
  deployment: env.AZURE_OPENAI_DEPLOYMENT,
  // Without an explicit apiKey option, the SDK falls back to the
  // AZURE_OPENAI_API_KEY environment variable.
});
```

Released under the MIT License.