
finetune()

Starts, resumes, inspects, pauses, or cancels a finetuning job for a loaded model.

```typescript
function finetune(params: FinetuneRunParams, rpcOptions?: RPCOptions): FinetuneHandle;
function finetune(params: FinetuneStopParams | FinetuneGetStateParams, rpcOptions?: RPCOptions): Promise<FinetuneResult>;
```

Parameters

| Name | Type | Description |
| --- | --- | --- |
| `params` | `FinetuneRunParams \| FinetuneStopParams \| FinetuneGetStateParams` | The finetuning parameters; the shape of this object selects the overload |
| `rpcOptions` | `RPCOptions` | Optional RPC transport options |

FinetuneRunParams

Used to start or resume a finetuning job.

| Field | Type | Description |
| --- | --- | --- |
| `modelId` | `string` | The identifier of the loaded model to finetune |
| `operation` | `"start" \| "resume"` | Omit to let the add-on choose whether to start fresh or resume automatically |
| `options` | `FinetuneOptions` | Finetuning configuration |

FinetuneStopParams

Used to pause or cancel a running job.

| Field | Type | Description |
| --- | --- | --- |
| `modelId` | `string` | The identifier of the model |
| `operation` | `"pause" \| "cancel"` | The stop operation |

FinetuneGetStateParams

Used to inspect the current state of a finetuning job.

| Field | Type | Description |
| --- | --- | --- |
| `modelId` | `string` | The identifier of the model |
| `operation` | `"getState"` | Must be `"getState"` |
| `options` | `FinetuneOptions` | Finetuning configuration |

FinetuneOptions

| Field | Type | Description |
| --- | --- | --- |
| `trainDatasetDir` | `string` | Directory containing the training dataset |
| `validation` | `FinetuneValidation` | Validation configuration |
| `outputParametersDir` | `string` | Directory where output adapter parameters are written |
| `numberOfEpochs` | `number` | Number of epochs to run |
| `learningRate` | `number` | Learning rate override |
| `contextLength` | `number` | Context length override |
| `batchSize` | `number` | Batch size override |
| `microBatchSize` | `number` | Micro batch size override |
| `assistantLossOnly` | `boolean` | Compute loss only on assistant tokens |
| `loraRank` | `number` | LoRA rank override |
| `loraAlpha` | `number` | LoRA alpha override |
| `loraInitStd` | `number` | LoRA initialization standard deviation |
| `loraSeed` | `number` | LoRA initialization seed |
| `loraModules` | `string` | Comma-separated LoRA module selection |
| `checkpointSaveDir` | `string` | Directory for checkpoint snapshots |
| `checkpointSaveSteps` | `number` | Checkpoint save interval (in steps) |
| `chatTemplatePath` | `string` | Custom chat template path |
| `lrScheduler` | `"constant" \| "cosine" \| "linear"` | Learning rate scheduler |
| `lrMin` | `number` | Minimum learning rate |
| `warmupRatio` | `number` | Warmup ratio (0–1) |
| `warmupRatioSet` | `boolean` | Enable warmup ratio |
| `warmupSteps` | `number` | Warmup step count |
| `warmupStepsSet` | `boolean` | Enable explicit warmup steps |
| `weightDecay` | `number` | Weight decay override |
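As a sketch, the scheduler and warmup knobs might be combined like this. The `FinetuneOptionsSketch` interface below is a local, partial mirror of the table above (nothing is imported from the library), and treating the override fields as optional is my reading of the table, not something the source states:

```typescript
// Hypothetical local mirror of part of the FinetuneOptions shape.
interface FinetuneOptionsSketch {
  trainDatasetDir: string;
  validation:
    | { type: "none" }
    | { type: "split"; fraction?: number }
    | { type: "dataset"; path: string };
  outputParametersDir: string;
  numberOfEpochs?: number;
  learningRate?: number;
  lrScheduler?: "constant" | "cosine" | "linear";
  lrMin?: number;
  warmupRatio?: number;
  warmupRatioSet?: boolean;
  loraRank?: number;
  loraAlpha?: number;
}

// A cosine-decay run with warmup; warmupRatioSet flags that warmupRatio should apply.
const options: FinetuneOptionsSketch = {
  trainDatasetDir: "./dataset/train",
  validation: { type: "split", fraction: 0.1 },
  outputParametersDir: "./artifacts/lora",
  numberOfEpochs: 3,
  learningRate: 2e-4,
  lrScheduler: "cosine",
  lrMin: 1e-5,
  warmupRatio: 0.03,
  warmupRatioSet: true,
  loraRank: 16,
  loraAlpha: 32,
};
```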

FinetuneValidation

Discriminated union on `type`:

| Variant | Fields | Description |
| --- | --- | --- |
| `{ type: "none" }` | — | No validation |
| `{ type: "split", fraction?: number }` | `fraction` defaults to `0.05` | Split training data for validation |
| `{ type: "dataset", path: string }` | — | Use a separate validation dataset |
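The union lends itself to exhaustive `switch` narrowing. A minimal sketch, with the type transcribed locally from the table above rather than imported from the library:

```typescript
// Local transcription of the FinetuneValidation union.
type FinetuneValidation =
  | { type: "none" }
  | { type: "split"; fraction?: number }
  | { type: "dataset"; path: string };

// Resolve a human-readable description, applying the documented 0.05 default.
function describeValidation(v: FinetuneValidation): string {
  switch (v.type) {
    case "none":
      return "no validation";
    case "split":
      return `hold out fraction ${v.fraction ?? 0.05} of training data`;
    case "dataset":
      return `validate against ${v.path}`;
  }
}
```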

Returns

The return type depends on the operation:

Run overload (`operation` omitted, `"start"`, or `"resume"`):

`FinetuneHandle`, an object with the following fields:

| Field | Type | Description |
| --- | --- | --- |
| `progressStream` | `AsyncGenerator<FinetuneProgress>` | Stream of training progress ticks |
| `result` | `Promise<FinetuneResult>` | Resolves when the job finishes |

Reply overload (`"pause"`, `"cancel"`, or `"getState"`):

`Promise<FinetuneResult>`

FinetuneProgress

| Field | Type | Description |
| --- | --- | --- |
| `is_train` | `boolean` | Whether this tick is from the training phase (vs. validation) |
| `loss` | `number \| null` | Current loss value |
| `loss_uncertainty` | `number \| null` | Loss uncertainty |
| `accuracy` | `number \| null` | Current accuracy |
| `accuracy_uncertainty` | `number \| null` | Accuracy uncertainty |
| `global_steps` | `number` | Total steps completed |
| `current_epoch` | `number` | Current epoch index |
| `current_batch` | `number` | Current batch index |
| `total_batches` | `number` | Total batches in the epoch |
| `elapsed_ms` | `number` | Elapsed time in milliseconds |
| `eta_ms` | `number` | Estimated time remaining in milliseconds |
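Note that `loss` and `accuracy` can be `null` (for example, before the first measurement), so consumers should guard before formatting. A sketch of a per-tick log-line formatter, with the type mirrored locally from the table above:

```typescript
// Local mirror of the FinetuneProgress tick (snake_case fields as in the table).
interface FinetuneProgress {
  is_train: boolean;
  loss: number | null;
  accuracy: number | null;
  global_steps: number;
  current_epoch: number;
  current_batch: number;
  total_batches: number;
  elapsed_ms: number;
  eta_ms: number;
}

// One log line per tick; loss may be null early in a run.
function formatProgress(p: FinetuneProgress): string {
  const phase = p.is_train ? "train" : "val";
  const loss = p.loss === null ? "n/a" : p.loss.toFixed(4);
  const eta = Math.round(p.eta_ms / 1000);
  return `[${phase}] epoch ${p.current_epoch} batch ${p.current_batch}/${p.total_batches} loss=${loss} eta=${eta}s`;
}
```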

FinetuneResult

| Field | Type | Description |
| --- | --- | --- |
| `type` | `"finetune"` | Response type discriminator |
| `status` | `FinetuneStatus` | Current job status |
| `stats` | `FinetuneStats \| undefined` | Final training statistics (present when completed) |

FinetuneStatus

`"IDLE" | "RUNNING" | "PAUSED" | "CANCELLED" | "COMPLETED"`

FinetuneStats

| Field | Type | Description |
| --- | --- | --- |
| `train_loss` | `number \| undefined` | Final training loss |
| `train_loss_uncertainty` | `number \| null \| undefined` | Training loss uncertainty |
| `val_loss` | `number \| undefined` | Final validation loss |
| `val_loss_uncertainty` | `number \| null \| undefined` | Validation loss uncertainty |
| `train_accuracy` | `number \| undefined` | Final training accuracy |
| `train_accuracy_uncertainty` | `number \| null \| undefined` | Training accuracy uncertainty |
| `val_accuracy` | `number \| undefined` | Final validation accuracy |
| `val_accuracy_uncertainty` | `number \| null \| undefined` | Validation accuracy uncertainty |
| `learning_rate` | `number \| undefined` | Final learning rate |
| `global_steps` | `number` | Total steps completed |
| `epochs_completed` | `number` | Total epochs completed |
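Because most fields are optional, a consumer should guard before reporting them; presumably the `val_*` fields are absent when validation was `{ type: "none" }`, though the source does not say so explicitly. A sketch with the type mirrored locally (a partial, hypothetical transcription of the table above):

```typescript
// Partial local mirror of FinetuneStats.
interface FinetuneStats {
  train_loss?: number;
  val_loss?: number;
  train_accuracy?: number;
  val_accuracy?: number;
  learning_rate?: number;
  global_steps: number;
  epochs_completed: number;
}

// Build a summary line from whichever optional fields are present.
function summarizeStats(s: FinetuneStats): string {
  const parts = [`${s.epochs_completed} epochs`, `${s.global_steps} steps`];
  if (s.train_loss !== undefined) parts.push(`train_loss=${s.train_loss}`);
  if (s.val_loss !== undefined) parts.push(`val_loss=${s.val_loss}`);
  return parts.join(", ");
}
```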

Throws

| Error | When |
| --- | --- |
| `INVALID_RESPONSE_TYPE` | The response type does not match the expected `"finetune"` |
| `STREAM_ENDED_WITHOUT_RESPONSE` | The stream ended without receiving the terminal finetune response |

Example

```typescript
const handle = finetune({
  modelId,
  options: {
    trainDatasetDir: "./dataset/train",
    validation: { type: "split", fraction: 0.05 },
    outputParametersDir: "./artifacts/lora",
    numberOfEpochs: 2,
  },
});

for await (const progress of handle.progressStream) {
  console.log(progress.global_steps, progress.loss);
}

console.log(await handle.result);

// Pause a running job
const pauseResult = await finetune({ modelId, operation: "pause" });
console.log(pauseResult.status); // "PAUSED"

// Inspect current state
const state = await finetune({
  modelId,
  operation: "getState",
  options: {
    trainDatasetDir: "./dataset/train",
    validation: { type: "none" },
    outputParametersDir: "./artifacts/lora",
  },
});
console.log(state.status);
```
