ai

Docker · Verified Publisher · San Francisco, CA, USA

Displaying 1 to 30 of 63 repositories

| Type | Description | Pulls | Stars | Last Updated |
|---|---|---|---|---|
| Model | Qwen3-Coder is Qwen’s new series of coding agent models. | 100K+ | 21 | 13 days |
| Model | 744B MoE language model with 40B active params for reasoning, coding, and agentic tasks (FP8) | 3.1K | 2 | 13 days |
| Model | 397B-parameter MoE multimodal LLM with 17B active params, 262K context, 201 languages | 2.6K | 1 | 13 days |
| Model | 397B MoE model with 17B activation for reasoning, coding, agents, and multimodal understanding | 10K+ | 3 | 14 days |
| Model | Advanced coding agent model with 80B params (3B active MoE) for code generation and debugging | 9.3K | 1 | 21 days |
| Model | Efficient 80B MoE coding model with 3B activated params, 256K context, and agentic capabilities | 10K+ | 1 | 21 days |
| Model | Image generation model, uses a base latent diffusion model plus a refiner. | 10K+ | 2 | about 1 month |
| Model | GLM-4.7-Flash is a top 30B-A3B MoE, balancing strong performance with efficient deployment. | 10K+ | 3 | about 1 month |
| Model | GLM-4.7-Flash is a top 30B-A3B MoE, balancing strong performance with efficient deployment. | 10K+ | 1 | about 1 month |
| Model | Devstral Small 2 is an FP8 instruct LLM for agentic SWE tasks, codebase tooling, and SWE-bench. | 10K+ | 4 | about 2 months |
| Model | FunctionGemma is a 270M open model for fine-tuned, offline function-calling agents on small devices. | 4.1K | 1 | about 2 months |
| Model | FunctionGemma is a 270M open model for fine-tuned, offline function-calling agents on small devices. | 6.2K | 2 | about 2 months |
| Model | Kimi K2 Thinking: open-source agent with deep reasoning, stable tool use, fast INT4, 256k context. | 10K+ | 1 | 3 months |
| Model | Kimi K2 Thinking: open-source agent with deep reasoning, stable tool use, fast INT4, 256k context. | 10K+ | 1 | 3 months |
| Model | DeepSeek-V3.2 boosts efficiency and reasoning with DSA, scalable RL, agentic data; IMO/IOI wins. | 10K+ | 9 | 3 months |
| Model | Ministral 3: compact vision-enabled model with near-24B performance, optimized for local edge use | 10K+ | 4 | 3 months |
| Model | Ministral 3: compact vision-enabled model with near-24B performance, optimized for local edge use | 50K+ | 2 | 3 months |
| Model | Multilingual reranking model for text retrieval, scoring document relevance across 119 languages. | 10K+ | 2 | 3 months |
| Model | Multilingual reranking model for text retrieval, scoring document relevance across 119 languages. | 8.5K | 0 | 3 months |
| Model | Snowflake’s Arctic-Embed v2.0 boosts multilingual retrieval and efficiency | 4.1K | 0 | 4 months |
| Model | Qwen3 Embedding: multilingual models for advanced text/ranking tasks like retrieval & clustering. | 10K+ | 1 | 4 months |
| Model | Qwen3 Embedding: multilingual models for advanced text/ranking tasks like retrieval & clustering. | 10K+ | 0 | 4 months |
| Model | OpenAI’s open-weight models designed for powerful reasoning, agentic tasks | 100K+ | 42 | 4 months |
| Model | The most advanced Qwen model yet, with major gains in text, vision, video, and reasoning. | 100K+ | 9 | 4 months |
| Model | Safety reasoning models for policy-based text classification and foundational safety tasks. | 10K+ | 2 | 4 months |
| Model | Qwen3 is the latest Qwen LLM, built for top-tier coding, math, reasoning, and language tasks. | 500K+ | 120 | 4 months |
| Model | Granite-4.0-nano: lightweight instruct model trained via SFT, RL, and merging on diverse data. | 8.7K | 0 | 4 months |
| Model | Granite-4.0-h-nano: lightweight instruct model trained via SFT, RL, and merging on diverse data. | 4.1K | 1 | 4 months |
| Model | Google’s latest Gemma, small yet strong for chat and generation | 10K+ | 1 | 4 months |
| Model | OpenAI’s open-weight models designed for powerful reasoning, agentic tasks | 10K+ | 1 | 4 months |