DeepSeek AI Model #154582
-
The DeepSeek AI model is showing an 'Unknown model' error in the Pydantic AI examples. OpenAI and Gemini are working fine. Can someone reply with the solution? Thanks.
-
Welcome to the GitHub Community, @srikanthsp20, we're happy you're here! You are more likely to get a useful response if you post your question(s) in the applicable category and are explicit about what your project entails; giving a few more details might help someone give you a nudge in the right direction. I've gone ahead and moved it for you. Good luck!
-
The OpenAI / Gemini models work but indicate "Insufficient Quota". I want to use DeepSeek, which is free, in the Pydantic AI examples, but an 'Unknown model' error is displayed. Can you reply with the solution? Thanks.
-
DeepSeek AI is a family of generative AI models and large language models (LLMs). If you are trying to integrate or fine-tune one, the first thing to check is that you are loading it by its exact Hugging Face repository ID. Example usage:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Use the exact repository ID from the Hugging Face Hub;
# "deepseek-ai/deepseek-llm-7b-chat" is one published DeepSeek chat model.
model_name = "deepseek-ai/deepseek-llm-7b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Hello, how can I help you?", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))

Let me know what specific issue you're facing, and I'll be happy to help!
-
Hello Edmon02,
I ran the Python program offline on my notebook (i3, 64-bit, Intel(R) HD Graphics 3000) and it showed a fatal Kivy error: "OpenGL 2.0 Not found". I updated the graphics driver from the Microsoft site, but I still get the same error. Kindly reply whether it is a hardware problem or whether some other solution is possible. Thanks for sparing your time.
SPSrikanth
Chennai, India
On Thu, Apr 3, 2025 at 10:42 PM Edmon Sahakyan ***@***.***> wrote:

Hello Edmon02, the above code is for uploading an image through Android's camera. What I want to do is select an area on the screen and get audio speech output on Android. Do you think it's possible? If so, can you share the code? Thanks.
Hi! Thanks for clarifying your requirement; I understand now that you want to select a specific area on the screen (e.g., a portion of an image or text displayed on your Android phone) using your finger on the touchscreen, extract the text from that area, and convert it to audio speech output. Yes, this is definitely possible! Below, I'll explain how to approach it and provide a sample code snippet using Python and Kivy, since Kivy is well-suited for handling touch events on Android.
How It Can Work:
1. *Display the Image:* Show an image (e.g., a train timetable) on the screen.
2. *Touch Selection:* Allow the user to drag their finger to draw a rectangle over the desired area.
3. *Crop the Selected Area:* Extract that portion of the image.
4. *OCR:* Use Tesseract to recognize text in the cropped area.
5. *Text-to-Speech:* Convert the extracted text to audio using gTTS or another TTS library.
6. *Android Deployment:* Package it as an APK.
Challenges:
- Kivy handles touch events well, but you'll need to map the screen coordinates to the image's coordinates.
- The DeepSeek model might not be necessary here unless you want to refine the extracted text further (I'll keep it optional).
Sample Code:
Here's a basic Kivy app that lets you select an area on an image with your finger and converts the extracted text to speech:
from kivy.app import App
from kivy.uix.widget import Widget
from kivy.uix.image import Image
from kivy.uix.boxlayout import BoxLayout
from kivy.graphics import Color, Rectangle
from kivy.core.window import Window
from kivy.core.audio import SoundLoader
import pytesseract
from PIL import Image as PILImage
from gtts import gTTS

class TouchWidget(Widget):
    def __init__(self, **kwargs):
        super(TouchWidget, self).__init__(**kwargs)
        self.start_pos = None
        self.end_pos = None
        self.rect = None
        self.image_path = "timetable.jpg"  # Replace with your image path
        self.img = PILImage.open(self.image_path)
        # Show the image stretched to the window, so window coordinates
        # map linearly onto image coordinates
        self.image = Image(source=self.image_path, size=Window.size,
                           allow_stretch=True, keep_ratio=False)
        self.add_widget(self.image)

    def on_touch_down(self, touch):
        # Remove any rectangle left over from a previous selection
        if self.rect is not None:
            self.canvas.remove(self.rect)
        # Record the starting position of the touch
        self.start_pos = (touch.x, touch.y)
        with self.canvas:
            Color(1, 0, 0, 0.3)  # translucent overlay keeps the image visible
            self.rect = Rectangle(pos=self.start_pos, size=(1, 1))
        return True

    def on_touch_move(self, touch):
        if self.start_pos is None:
            return False
        # Redraw the rectangle as the finger moves
        self.end_pos = (touch.x, touch.y)
        self.canvas.remove(self.rect)
        width = self.end_pos[0] - self.start_pos[0]
        height = self.end_pos[1] - self.start_pos[1]
        with self.canvas:
            Color(1, 0, 0, 0.3)
            self.rect = Rectangle(pos=self.start_pos, size=(width, height))
        return True

    def on_touch_up(self, touch):
        if self.start_pos is None:
            return False
        # When the finger is lifted, process the selected area
        self.end_pos = (touch.x, touch.y)
        self.process_selection()
        return True

    def process_selection(self):
        x1, y1 = self.start_pos
        x2, y2 = self.end_pos
        # Scale window coordinates to image-pixel coordinates
        scale_x = self.img.width / Window.width
        scale_y = self.img.height / Window.height
        # Convert Kivy coordinates (bottom-left origin) to PIL (top-left origin)
        left = int(min(x1, x2) * scale_x)
        right = int(max(x1, x2) * scale_x)
        top = int(self.img.height - max(y1, y2) * scale_y)
        bottom = int(self.img.height - min(y1, y2) * scale_y)
        cropped_img = self.img.crop((left, top, right, bottom))
        # Extract text using OCR
        text = pytesseract.image_to_string(cropped_img)
        print(f"Extracted Text: {text}")
        # Convert text to speech
        if text.strip():
            tts = gTTS(text=text, lang="en")  # Change "en" to "ta" for Tamil, etc.
            tts.save("output.mp3")
            sound = SoundLoader.load("output.mp3")
            if sound:
                sound.play()
        else:
            print("No text detected in the selected area.")

class MyApp(App):
    def build(self):
        layout = BoxLayout(orientation="vertical")
        layout.add_widget(TouchWidget())
        return layout

if __name__ == "__main__":
    MyApp().run()
How to Use This Code:
1. *Install Dependencies:*
   - On your PC: pip install kivy pytesseract pillow gtts
   - Install the Tesseract OCR engine itself (e.g., sudo apt install tesseract-ocr on Linux); a quick sanity check is sketched after this list.
2. *Prepare an Image:*
   - Replace "timetable.jpg" with the path to your image file (e.g., a train timetable photo).
3. *Run on PC First:*
   - Test the code on your PC. Click and drag with your mouse to select an area, and it will extract the text and play it as speech.
4. *Deploy to Android:*
   - Use *Buildozer* (pip install buildozer) to package it into an APK:
     - Create a buildozer.spec file with buildozer init.
     - Edit the spec file to include dependencies: requirements = kivy, pillow, pytesseract, gtts.
     - Run buildozer android debug to generate the APK.
   - Install the APK on your Android phone and use your finger to select the area.
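As the sanity check mentioned in step 1, you can confirm that pytesseract can actually find the Tesseract binary before running the full app (a minimal sketch; get_tesseract_version() raises TesseractNotFoundError if the engine is not on your PATH):

import pytesseract

# Prints the installed Tesseract version, or raises TesseractNotFoundError
print(pytesseract.get_tesseract_version())

# On Windows you may need to point pytesseract at the executable explicitly;
# the path below is only an illustrative default install location:
# pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"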
Notes:
- *Image Source:* This code assumes a preloaded image (timetable.jpg). To use the camera, you'd need to integrate Kivy's Camera widget or Chaquopy with Android's camera API; let me know if you want that instead!
- *Multilingual Support:* Change the lang parameter in gTTS (e.g., "ta" for Tamil, "hi" for Hindi).
- *DeepSeek Optional:* If you want to use DeepSeek to refine the OCR text, add it before the TTS step (like in my previous reply), but it might need an API for mobile use.
Let me know if you run into issues setting this up or need help with camera integration instead of a static image! I'm happy to refine the code further.
-
The "Unknown model" error in DeepSeek-AI with Pydantic likely means the model name isn't supported or integrated properly. Ensure you're using a valid model name and that the DeepSeek API is correctly configured in your client. |
-
Try to find more research analysis.
-
Hi Srikanth, thanks for sharing your detailed updates; I admire your persistence! Regarding your latest question: "Will a USB adapter to a secondary Monitor/TV work as a substitute for OpenGL 2.0?" Unfortunately, a USB-to-HDMI or USB display adapter won't solve the OpenGL 2.0 limitation, because the adapter doesn't improve the GPU capabilities or driver support of your internal graphics card (Intel HD Graphics 3000). OpenGL is handled by the GPU and its driver, not by the external display output. Here are a few options to consider:
- ML Kit for on-device OCR
- Android's built-in TTS
- Development via Java/Kotlin, or via Python+Chaquopy (if you still prefer Python)
Let me know if you need help setting up a lightweight mobile app or exploring OCR alternatives that don't depend on OpenGL.
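If you do want to keep trying on the PC, one workaround sometimes suggested for old Intel GPUs on Windows is to route Kivy through ANGLE (OpenGL ES translated to DirectX) instead of native OpenGL. A minimal sketch, assuming Windows with the kivy_deps.angle package installed; it may still fail on HD Graphics 3000:

import os

# Must be set before the first Kivy import
# (requires: pip install kivy_deps.angle; Windows only)
os.environ["KIVY_GL_BACKEND"] = "angle_sdl2"

from kivy.app import App  # the rest of the app is unchanged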
-
I agree with you.
-
DeepSeek models aren't yet officially integrated into the OpenAI or Pydantic AI schemas. That 'Unknown model' error means the client library doesn't recognize the model ID. You need to either point an OpenAI-compatible client at DeepSeek's endpoint, or use a Pydantic AI version whose DeepSeek provider recognizes the model ID.
If you're using the OpenAI Python client, ensure the base URL points to DeepSeek's endpoint and not api.openai.com. Otherwise, Pydantic validation will fail, since 'deepseek' isn't in the known model list.
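For example, a minimal sketch with the official openai Python client; the base URL and model name follow DeepSeek's published OpenAI-compatible API, and DEEPSEEK_API_KEY is assumed to be set in your environment:

import os
from openai import OpenAI

# Point the OpenAI-compatible client at DeepSeek instead of api.openai.com
client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)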
-
The "Unknown model" error in Pydantic AI appears because the library doesn't recognize the model name you gave for DeepSeek. This usually happens if:
- You're not using the exact model ID the provider expects (e.g. "deepseek-chat").
- The DeepSeek provider isn't set correctly.
- Your Pydantic AI version is outdated.
Fix:
1. Update Pydantic AI: pip install --upgrade pydantic_ai
2. Use the correct model and provider, for example:

from pydantic_ai.models.openai import OpenAIChatModel
from pydantic_ai.providers.deepseek import DeepSeekProvider

model = OpenAIChatModel(
    "deepseek-chat",
    provider=DeepSeekProvider(api_key="your-deepseek-api-key"),
)

3. Ensure your API key and endpoint are valid (check your DeepSeek dashboard).
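To complete the picture, here is how the configured model would be run (a short sketch; result.output is the accessor on recent pydantic-ai releases, while older ones used result.data):

from pydantic_ai import Agent

agent = Agent(model)  # the OpenAIChatModel configured above
result = agent.run_sync("Say hello in one short sentence.")
print(result.output)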
-
I think it is because old software and hardware are not supported, or because of an old software version. For example, some YouTube services don't run on Android 7 or 8, so you need to upgrade; the same applies to Android 10. It might be a low software version, so please upgrade it. :)
-
Make sure you’re using the correct DeepSeek model ID and have configured the DeepSeek provider in Pydantic AI. Updating to the latest Pydantic AI version often fixes the “Unknown model” error. |
Hello,
Good day again. The same error is displaying: "Found OpenGL 1.1", "Not found OpenGL 2.0", and an "Upgrade Hardware and/or drivers" message. I don't have another system to test on. Will a USB adapter to a secondary Monitor/TV work as a substitute for OpenGL 2.0?
Thanks for your time,
Srikanth SP
Chennai, India