19 questions
2 votes · 1 answer · 219 views
LM Studio - vk::Device::getFenceStatus: ErrorDeviceLost
For a few days now, I have been having problems with LM Studio (Version 0.3.32).
I have this hardware:
Intel(R) Core(TM) Ultra 7 155U
RAM: 32 GB
Intel(R) Graphics - VRAM 18 GB (Vulkan)
I have this ...
1 vote · 1 answer · 430 views
LM Studio 0.3.23 (macOS) does not show option to start local server
Besides using the CLI to start a local server in LM Studio, I used to use the respective tab in the LM Studio GUI. However, the GUI no longer shows any option to start the server (even after ...
0 votes · 0 answers · 693 views
Using the Continue plugin in VS Code to interface with an AI provider
Good morning, I have to interface with an LM Studio instance running the model olympiccoder-7b:2 on a local network at the address http://10.125.37.11:1234/v1/.
I want to configure the Continue plugin of VS Code in order ...
0 votes · 1 answer · 316 views
HTML output in LM Studio
I am using Gemma 3 27B Instruct 8bit from MLX community on LM Studio but I notice it outputs HTML tags, for example "< sup>" and "< /sup>" instead of markdown format. I ...
1 vote · 1 answer · 538 views
Python request to LM Studio model fails but curl succeeds
I tried to request the local model using Python with the code below:
import requests
import json

url = 'http://localhost:1234/v1/chat/completions'
headers = {
    'Content-Type': 'application/json'
}
...
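For reference, a request of this shape generally works against LM Studio's OpenAI-compatible endpoint. A common curl-vs-requests pitfall is passing the payload via data= (form-encoded) instead of json=; the sketch below prepares the request without sending it so the wire format can be inspected. The model name is a placeholder and the port is an assumption:

```python
import json
import requests

url = 'http://localhost:1234/v1/chat/completions'  # assumed local endpoint
payload = {
    'model': 'local-model',  # placeholder; LM Studio serves the loaded model
    'messages': [{'role': 'user', 'content': 'Hello'}],
}

# json= serializes the dict and sets the Content-Type header itself;
# passing the same dict via data= would form-encode it, which the
# server's JSON parser rejects. prepare() builds the request unsent.
req = requests.Request('POST', url, json=payload).prepare()
body = json.loads(req.body)  # exactly what would go on the wire
```

Calling `requests.post(url, json=payload)` then sends the same body; compare `req.body` against the `-d` string that works in curl.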
0 votes · 1 answer · 400 views
SemanticKernel GetStreamingChatMessageContentsAsync empty but GetChatMessageContentAsync works fine
I just got started with SemanticKernel on a local LLM.
I got it working with the following code:
var chat = app.Services.GetRequiredService<IChatCompletionService>();
ChatMessageContent response =...
-2 votes · 2 answers · 5k views
What can cause this error in LM Studio: "Failed to send message vk::Queue::submit: ErrorDeviceLost"
I get that error in the following scenarios:
once I ask a second question to a model without reloading it
once I create a new chat with any downloaded model without reloading it
once I try ...
0 votes · 1 answer · 377 views
Cannot receive request on LM Studio server from Docker container application?
I am trying to self-host Khoj AI and I have followed all the steps in the documentation.
The only thing I changed in the yml file is:
- OPENAI_API_BASE=http://localhost:1234/v1/
That is where my ...
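A detail worth noting here: inside a container, localhost refers to the container itself, not to the machine running LM Studio. On Docker Desktop the host is typically reachable as host.docker.internal (an assumption about this setup), so the same yml entry would look like:

```yaml
# Hypothetical change to the same environment entry:
- OPENAI_API_BASE=http://host.docker.internal:1234/v1/
```

On plain Linux Docker, the host's bridge IP or `--network host` serves the same purpose.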
1 vote · 1 answer · 648 views
Trying to create a RAG with everything local; I'm stuck on using the embeddings
I'm trying to create a RAG: I start by breaking the document into chunks, send them to a locally hosted embedding model, get the vectors back, and then I get stuck at the FAISS part.
My problem is ...
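Once the vectors are back, the FAISS step is just nearest-neighbor search over them. A pure-NumPy sketch of that operation (the toy vectors, their dimensionality, and the cosine metric are all assumptions; it mirrors what a faiss.IndexFlatIP over normalized vectors computes):

```python
import numpy as np

# Toy stand-ins for embeddings returned by a local model:
# each row is one chunk's vector.
chunk_vectors = np.array([
    [0.9, 0.1, 0.0],
    [0.0, 1.0, 0.0],
    [0.1, 0.2, 0.97],
], dtype=np.float32)
query = np.array([1.0, 0.0, 0.1], dtype=np.float32)  # embedded question

def normalize(x):
    # Unit-length rows, so the dot product equals cosine similarity.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

scores = normalize(chunk_vectors) @ normalize(query)
best = int(np.argmax(scores))  # index of the most similar chunk
```

With real data, the same two steps map onto `index.add(vectors)` and `index.search(query, k)` in FAISS; the retrieved chunk texts then go into the prompt.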
0 votes · 1 answer · 638 views
Autogen with LM Studio running llama3
Very new to Autogen. I have the model meta-llama-3.1-8b-instruct running on LM Studio at http://127.0.0.1:1234/v1. I am trying to run the example code provided at Autogen Getting Started. The ...
2 votes · 1 answer · 2k views
How to load a new model in LM Studio 0.3.5
I have installed LM Studio 0.3.5 on my new Mac Mini M4, I have loaded a model (Llama-3.1-8B) and I can chat. Everything is fine.
But now I would like to try other models (Mistral, etc.) and when I ...
1 vote · 2 answers · 832 views
TypeError: 'NoneType' object is not subscriptable - I can't seem to solve it
I am trying to create a local chatbot using LM Studio. I have the latest OpenAI Python library installed. However, when I run my app.py file, it states that:
line 25, in <module>
response = ...
0 votes · 1 answer · 52k views
Failed loading model in LM Studio [closed]
Trying to load "TheBloke • mistral instruct v0 1 7B q3_k_s gguf" in LM Studio, I get the following message.
Failed to load model
Error message
"llama.cpp error: 'vk::Device::...
3 votes · 2 answers · 6k views
LM Studio - Failed to load model
I get this error message when loading the model in LM Studio 0.2.31:
Model: stable-diffusion-v1-5-pruned-emaonly-f32.gguf
Error:
"llama.cpp error: 'invalid model: tensor 'cond_stage_model....
0 votes · 1 answer · 2k views
Does a model from LM Studio learn?
I just started with local AI models. My question is: if I download a model, ask it for a task, and correct it, does the model remember the correction the next time? I mean, is it growing in ...
0 votes · 2 answers · 2k views
Running LLMs locally causing this error: The request was canceled due to the configured HttpClient.Timeout of 100 seconds elapsing [closed]
I have the following code to send a prompt request to a local LLM, Phi-3. Although it shows the correct response in LM Studio (see the image), in VS I receive a timeout error. Any help?
var phi3 = ...
4 votes · 2 answers · 6k views
What do the "I" in "_IQ" and the "_M" mean in the name "Meta-Llama-3-8B-Instruct-IQ3_M.gguf"?
I'd appreciate it if someone could let me know what the "I" in "_IQ" and the "_M" mean in the name "Meta-Llama-3-8B-Instruct-IQ3_M.gguf".
I searched and ...
3 votes · 0 answers · 5k views
LM Studio issue in Model Loading
I downloaded LM Studio version 0.2.18 and downloaded an LLM model as shown in the image, but it shows "You have 0 models, taking up 0 of disk space." even though I have downloaded the model, ...
0 votes · 1 answer · 1k views
What is the meaning of "Experts to Use" in a Mixture-of-Experts model?
I'm using Mixtral 8x7b, which is a Mixture of Experts model. I'm using it to translate low-resource languages, and getting decent results.
The option is given (in LM Studio) to "use" 0-8 ...