
I'm encountering a TypeScript error while trying to initialize a LLaMA model with quantization enabled using Transformers.js. The compiler is throwing an error indicating that the 'quantized' property isn't recognized in the Pipeline configuration.

How can I properly configure quantization for the Pipeline while satisfying TypeScript's type checking? Is there an alternative way to enable quantization for the LLaMA model in Transformers.js? Would appreciate any guidance on resolving this typing issue while maintaining model quantization functionality.

Object literal may only specify known properties, and 'quantized' does not exist in type '{ 
    task?: string | undefined; 
    model?: PreTrainedModel | undefined; 
    tokenizer?: PreTrainedTokenizer | undefined; 
    processor?: Processor | undefined; 
}'

Environment Details

  • Framework: Transformers.js (v2.17.2)
  • Model: Xenova/LLaMA-70b
  • TypeScript Version: v5.7.2
  • Node.js Version: v22.12.0

I have tried explicitly declaring the 'quantized' property as true:

import { Pipeline, AutoModelForCausalLM } from '@xenova/transformers';

class ModelWorker {
  private model: Pipeline | null = null;

  async initializeModel() {
    try {
      const model = await AutoModelForCausalLM.from_pretrained('Xenova/LLaMA-70b');
      this.model = new Pipeline({
        task: 'text-generation',
        model: model,
        quantized: true // <-- compiler rejects this property
      });
    } catch (error) {
      console.error('Error initializing model:', error);
      throw error;
    }
  }
}
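For reference, here is a minimal sketch of where I suspect the option is supposed to go: as a loading option passed to from_pretrained (or the pipeline() factory) rather than a Pipeline constructor field. The LoadOptions interface below is my own name for illustration, and the commented-out call is unverified against the library's actual types:

```typescript
// Sketch only: modeling 'quantized' as a *loading* option. The
// LoadOptions name is mine, not the library's; the shape is just
// what I'd expect from_pretrained to accept.
interface LoadOptions {
  quantized?: boolean;
}

const loadOptions: LoadOptions = { quantized: true };

// Hypothetical usage (unverified against @xenova/transformers' types):
// const model = await AutoModelForCausalLM.from_pretrained(
//   'Xenova/LLaMA-70b',
//   loadOptions
// );

console.log(loadOptions.quantized); // true
```

If this is the intended API, the Pipeline constructor would never need to see the quantization flag at all, which would explain why it is absent from the constructor's options type.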

Using transformers.js
