2 votes · 1 answer · 219 views

For a few days now, I have been having problems with LM Studio (version 0.3.32). My hardware: Intel(R) Core(TM) Ultra 7 155U, 32 GB RAM, Intel(R) Graphics with 18 GB VRAM (Vulkan). I have this ...
asked by VirtualCom
1 vote · 1 answer · 430 views

Besides using the CLI to start a local server in LM Studio, I used to use the respective tab in the LM Studio GUI. However, the GUI no longer shows any option to start the server (even after ...
asked by Heka
0 votes · 0 answers · 693 views

Good morning, I have to interface with an LM Studio instance running the model olympiccoder-7b:2 on a local network at http://10.125.37.11:1234/v1/. I want to configure the Continue plugin of VS Code in order ...
asked by Giox79
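For setups like the one above, Continue is pointed at an OpenAI-compatible endpoint through its config file. A minimal sketch, assuming the "lmstudio" provider name and the apiBase key from the Continue docs; the title is arbitrary and field names may differ between Continue versions:

```json
{
  "models": [
    {
      "title": "OlympicCoder 7B (LM Studio)",
      "provider": "lmstudio",
      "model": "olympiccoder-7b:2",
      "apiBase": "http://10.125.37.11:1234/v1"
    }
  ]
}
```

With a remote host like 10.125.37.11, the LM Studio server must also be configured to listen on the local network, not just localhost.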
0 votes · 1 answer · 316 views

I am using Gemma 3 27B Instruct 8bit from the MLX community on LM Studio, but I notice it outputs HTML tags, for example "<sup>" and "</sup>", instead of Markdown format. I ...
asked by languageoftheuniverse
1 vote · 1 answer · 538 views

I tried to query a local model using Python with the code below: import requests import json url = 'http://localhost:1234/v1/chat/completions' headers = { 'Content-Type': 'application/json' } ...
asked by leo0807
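Questions like this one usually come down to the request shape LM Studio's OpenAI-compatible server expects at /v1/chat/completions. A minimal sketch, assuming the default port 1234 and a placeholder model name:

```python
import json


def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat-completions payload for LM Studio."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
        "stream": False,
    }


if __name__ == "__main__":
    import requests  # pip install requests

    url = "http://localhost:1234/v1/chat/completions"
    payload = build_chat_request("local-model", "Hello!")
    # A generous timeout helps: first-token latency on CPU can be long.
    resp = requests.post(
        url,
        headers={"Content-Type": "application/json"},
        data=json.dumps(payload),
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])
```

The reply text lives at choices[0].message.content in the JSON response, mirroring the OpenAI API.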
0 votes · 1 answer · 400 views

I just got started with Semantic Kernel on a local LLM. I got it working with the following code: var chat = app.Services.GetRequiredService<IChatCompletionService>(); ChatMessageContent response =...
asked by Pierre
-2 votes · 2 answers · 5k views

I get that error in the following scenarios: once I ask a second question to a model without reloading it; once I create a new chat with any downloaded model without reloading it; once I try ...
asked by Arnold Hge
0 votes · 1 answer · 377 views

I am trying to self-host Khoj AI and I have followed all the steps in the documentation. The only thing I changed in the yml file is: - OPENAI_API_BASE=http://localhost:1234/v1/ That is where my ...
asked by Max Korhonen
1 vote · 1 answer · 648 views

I'm trying to create a RAG pipeline. I start by breaking the document into chunks, send them to a locally hosted embedding model, get the vectors back, and then I get stuck at the FAISS part. My problem is ...
asked by aceofjohnonlone
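The FAISS step in a pipeline like this is just adding the embedding vectors to an index and searching it with the query vector. A sketch of the search FAISS performs, in plain NumPy, with the equivalent faiss-cpu calls noted in comments (variable names are placeholders):

```python
import numpy as np


def top_k_l2(index_vectors: np.ndarray, query: np.ndarray, k: int):
    """Brute-force L2 nearest neighbours — what faiss.IndexFlatL2 does.

    Returns (distances, indices) of the k closest rows of index_vectors.
    """
    dists = ((index_vectors - query) ** 2).sum(axis=1)
    order = np.argsort(dists)[:k]
    return dists[order], order


# With FAISS itself (pip install faiss-cpu) the same search is:
#   import faiss
#   index = faiss.IndexFlatL2(dim)                  # exact L2 index
#   index.add(chunk_vectors.astype("float32"))       # embeddings, shape (n, dim)
#   distances, ids = index.search(query.astype("float32").reshape(1, -1), k)
# The returned ids map back to your chunk list:
#   retrieved = [chunks[i] for i in ids[0]]
```

The key detail that trips people up is that FAISS expects float32 arrays of shape (n, dim), and search takes a 2-D batch of queries, so a single query must be reshaped to (1, dim).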
0 votes · 1 answer · 638 views

I'm very new to Autogen. I have the model meta-llama-3.1-8b-instruct running in LM Studio at http://127.0.0.1:1234/v1. I am trying to run the example code provided at Autogen Getting Started. The ...
asked by Arindam
2 votes · 1 answer · 2k views

I have installed LM Studio 0.3.5 on my new Mac Mini M4, I have loaded a model (Llama-3.1-8B), and I can chat; everything is fine. But now I would like to try other models (Mistral, etc.), and when I ...
asked by user3102556
1 vote · 2 answers · 832 views

I am trying to create a local chatbot using LM Studio. I have the latest OpenAI Python library installed. However, when I run my app.py file, it fails with: line 25, in <module> response = ...
asked by N T
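For local chatbots like this, the usual pattern is the OpenAI v1.x client with base_url pointed at LM Studio's server. A minimal sketch, assuming the default port 1234 and a placeholder model identifier; LM Studio ignores the API key, but the client requires a non-empty string:

```python
def make_messages(prompt: str) -> list:
    """Single-turn chat message list in the OpenAI format."""
    return [{"role": "user", "content": prompt}]


if __name__ == "__main__":
    # pip install openai  (v1.x client; LM Studio speaks the same API)
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
    response = client.chat.completions.create(
        model="local-model",  # placeholder; use the identifier LM Studio shows
        messages=make_messages("Hello!"),
    )
    print(response.choices[0].message.content)
```

A common source of errors here is code written for the pre-1.0 openai module (openai.ChatCompletion.create), which no longer exists in the current library.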
0 votes · 1 answer · 52k views

Trying to load "TheBloke • Mistral Instruct v0.1 7B Q3_K_S GGUF" in LM Studio, I get the following message: Failed to load model. Error message: "llama.cpp error: 'vk::Device::...
asked by Yiannis Bakopoulos
3 votes · 2 answers · 6k views

I get this error message when loading the model in LM Studio 0.2.31: Model: stable-diffusion-v1-5-pruned-emaonly-f32.gguf Error: "llama.cpp error: 'invalid model: tensor 'cond_stage_model....
asked by Babak A
0 votes · 1 answer · 2k views

I just started with local AI models. My question: if I download a model, ask it for a task, and correct it, does the model remember the correction next time? I mean, is it growing in ...
asked by jos3m
0 votes · 2 answers · 2k views

I have the following code to send a prompt request to a local LLM, Phi-3. Although it shows the correct response in LM Studio (see the image), in VS I receive a timeout error. Any help? var phi3 = ...
asked by renakre
4 votes · 2 answers · 6k views

I'd appreciate it if someone could let me know what the "I" in "IQ3" and the "_M" mean in the name "Meta-Llama-3-8B-Instruct-IQ3_M.gguf". I searched and ...
asked by Franva
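For reference, in llama.cpp GGUF names the leading "I" marks an i-quant (importance-matrix quantization), the digit is the approximate bits per weight, and _S/_M/_L selects the small/medium/large variant trading file size against quality. A hypothetical helper illustrating how such a tag breaks apart (parse_quant is my own name, not a llama.cpp API):

```python
import re


def parse_quant(tag: str):
    """Split a llama.cpp quantization tag like 'IQ3_M' or 'Q4_K_M' into parts.

    Returns None when the tag does not match the usual pattern.
    """
    m = re.fullmatch(r"(I?)Q(\d)_([A-Z0-9_]+)", tag)
    if not m:
        return None
    return {
        "i_quant": m.group(1) == "I",   # importance-matrix quantization
        "bits": int(m.group(2)),        # approximate bits per weight
        "variant": m.group(3),          # size/quality variant, e.g. M, K_M, XXS
    }
```

So IQ3_M is an importance-matrix 3-bit quantization, medium variant — smaller than Q3_K_M at similar quality, but typically slower on CPU.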
3 votes · 0 answers · 5k views

I downloaded LM Studio version 0.2.18 and downloaded an LLM model as shown in the image, but it shows "You have 0 models, taking up 0 of disk space." But I have downloaded the model, ...
asked by Try try
0 votes · 1 answer · 1k views

I'm using Mixtral 8x7b, which is a Mixture of Experts model. I'm using it to translate low-resource languages, and getting decent results. The option is given (in LM Studio) to "use" 0-8 ...
asked by Laizer