
localai 2.29.0 #223250


Closed

wants to merge 1 commit into from

Conversation

@BrewTestBot (Member) commented May 12, 2025

Created by brew bump


Created with brew bump-formula-pr.

  • resource blocks have been checked for updates.
Release notes




v2.29.0

I am thrilled to announce the release of LocalAI v2.29.0! This update focuses heavily on refining our container image strategy, making default images leaner and providing clearer options for users needing specific features or hardware acceleration. We've also added support for new models like Qwen3, enhanced existing backends, and introduced experimental endpoints such as video generation!

⚠️ Important: Breaking Changes

This release includes significant changes to container image tagging and contents. Please review carefully:

  • Python Dependencies Moved: Images containing extra Python dependencies (like those for diffusers) now require the -extras suffix (e.g., latest-gpu-nvidia-cuda-12-extras). Default images are now slimmer and do not include these dependencies.
  • FFmpeg is Now Standard: All core images now include FFmpeg. The separate -ffmpeg tags have been removed. If you previously used an -ffmpeg tagged image, simply switch to the corresponding base image tag (e.g., latest-gpu-hipblas-ffmpeg becomes latest-gpu-hipblas); a migration sketch follows below.
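
For instance, migrating from a removed -ffmpeg tag is just a rename (a sketch using the hipblas tag from the example above):

# Before (tag removed in v2.29.0):
#   docker pull localai/localai:latest-gpu-hipblas-ffmpeg
# After (FFmpeg is now bundled in the base image):
docker pull localai/localai:latest-gpu-hipblas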

Some examples are shown below. Note that the CI is still publishing the images, so they won't be available until the jobs are processed; the installation scripts will be updated as soon as the images are publicly available.

CPU only image:

docker run -ti --name local-ai -p 8080:8080 localai/localai:latest

NVIDIA GPU Images:

# CUDA 12.0 with core features
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gpu-nvidia-cuda-12

# CUDA 12.0 with extra Python dependencies
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gpu-nvidia-cuda-12-extras

# CUDA 11.7 with core features
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gpu-nvidia-cuda-11

# CUDA 11.7 with extra Python dependencies
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gpu-nvidia-cuda-11-extras

# NVIDIA Jetson (L4T) ARM64
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-nvidia-l4t-arm64

AMD GPU Images (ROCm):

# ROCm with core features
docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-gpu-hipblas

# ROCm with extra Python dependencies
docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-gpu-hipblas-extras

Intel GPU Images (oneAPI):

# Intel GPU with FP16 support
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-gpu-intel-f16

# Intel GPU with FP16 support and extra dependencies
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-gpu-intel-f16-extras

# Intel GPU with FP32 support
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-gpu-intel-f32

# Intel GPU with FP32 support and extra dependencies
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-gpu-intel-f32-extras

Vulkan GPU Images:

# Vulkan with core features
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-gpu-vulkan

AIO Images (pre-downloaded models):

# CPU version
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-cpu

# NVIDIA CUDA 12 version
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-aio-gpu-nvidia-cuda-12

# NVIDIA CUDA 11 version
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-aio-gpu-nvidia-cuda-11

# Intel GPU version
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-gpu-intel-f16

# AMD GPU version
docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-aio-gpu-hipblas

For more information about the AIO images and pre-downloaded models, see Container Documentation.
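
Whichever image you run, a quick smoke test is to list the models the container serves through its OpenAI-compatible API (a minimal sketch; assumes the default port mapping used in the commands above):

# List the models currently served by the running container
curl http://localhost:8080/v1/models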

Key Changes in v2.29.0

📦 Container Image Overhaul

  • -extras Suffix: Images with additional Python dependencies are now identified by the -extras suffix.
  • Default Images: Standard tags (like latest, latest-gpu-nvidia-cuda-12) now provide core LocalAI functionality without the extra Python libraries.
  • FFmpeg Inclusion: FFmpeg is bundled in all images, simplifying setup for multimedia tasks.
  • New latest-* Tags: Added specific latest tags for various GPU architectures:
    • latest-gpu-hipblas (AMD ROCm)
    • latest-gpu-intel-f16 (Intel oneAPI FP16)
    • latest-gpu-intel-f32 (Intel oneAPI FP32)
    • latest-gpu-nvidia-cuda-12 (NVIDIA CUDA 12)
    • latest-gpu-vulkan (Vulkan)

🚀 New Features & Enhancements

  • Qwen3 Model Support: Officially integrated support for the Qwen3 model family.
  • Experimental Auto GPU Offload: LocalAI can now attempt to automatically detect GPUs and configure optimal layer offloading for llama.cpp and CLIP.
  • Whisper.cpp GPU Acceleration: Updated whisper.cpp and enabled GPU support via cuBLAS (NVIDIA) and Vulkan. SYCL and Hipblas support are in progress.
  • Experimental Video Generation: Introduced a /video/generations endpoint; a request sketch follows this list. Stay tuned for compatible model backends!
  • Installer Uninstall Option: The install.sh script now includes a --uninstall flag for easy removal (usage sketch below).
  • Expanded Hipblas Targets: Added support for a wider range of AMD GPU architectures: gfx803, gfx900, gfx906, gfx908, gfx90a, gfx942, gfx1010, gfx1030, gfx1032, gfx1100, gfx1101, gfx1102.
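
A request sketch for the new video endpoint (the endpoint path comes from this release; the JSON fields and model name are placeholder assumptions, since compatible backends are still landing):

# Hypothetical request; "some-video-model" and the fields shown are assumptions
curl http://localhost:8080/video/generations \
  -H "Content-Type: application/json" \
  -d '{"model": "some-video-model", "prompt": "a boat sailing at sunset"}'

And the new uninstall flag, assuming the installer is fetched from the documented https://localai.io/install.sh location:

# sh -s -- forwards --uninstall to the downloaded script
curl -fsSL https://localai.io/install.sh | sh -s -- --uninstall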

🧹 Backend Updates

  • AutoGPTQ Backend Removed: This backend has been dropped due to being discontinued upstream.
  • llama.cpp: Experimental support for automatically detecting GPU layer offloading.

The Complete Local Stack for Privacy-First AI

With LocalAGI rejoining LocalAI alongside LocalRecall, our ecosystem provides a complete, open-source stack for private, secure, and intelligent AI operations:


LocalAI

The free, Open Source OpenAI alternative. Acts as a drop-in replacement REST API compatible with OpenAI specifications for local AI inferencing. No GPU required.

Link: https://github.com/mudler/LocalAI
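
As a taste of the drop-in compatibility, a standard OpenAI-style chat request works unchanged against a local instance (a sketch; the model name is a placeholder for whatever model you have installed):

# Same request shape the OpenAI API expects, pointed at localhost
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen3", "messages": [{"role": "user", "content": "Hello!"}]}'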


LocalAGI

A powerful Local AI agent management platform. Serves as a drop-in replacement for OpenAI's Responses API, supercharged with advanced agentic capabilities and a no-code UI.

Link: https://github.com/mudler/LocalAGI


LocalRecall

A RESTful API and knowledge base management system providing persistent memory and storage capabilities for AI agents. Designed to work alongside LocalAI and LocalAGI.

Link: https://github.com/mudler/LocalRecall

Join the Movement! ❤️

A massive THANK YOU to our incredible community! LocalAI has over 32,500 stars, and LocalAGI has already rocketed past 650 stars!

As a reminder, LocalAI is real FOSS (Free and Open Source Software) and its sibling projects are community-driven and not backed by VCs or a company. We rely on contributors donating their spare time. If you love open-source, privacy-first AI, please consider starring the repos, contributing code, reporting bugs, or spreading the word!

👉 Check out the reborn LocalAGI v2 today: https://github.com/mudler/LocalAGI

Let's continue building the future of AI, together! 🙌


Full Changelog: mudler/LocalAI@v2.28.0...v2.29.0


@github-actions github-actions bot added the go and bump-formula-pr labels May 12, 2025
@chenrui333 (Member)

rice append --exec local-ai
make: rice: No such file or directory
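
(For context, an assumption not stated in this thread: rice is the CLI that ships with the go.rice asset-embedding library, normally installed with go install github.com/GeertJohan/go.rice/rice@latest, so the failure suggests it is missing from the build environment.)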

@chenrui333 chenrui333 added the build failure label May 12, 2025
github-actions bot (Contributor)

This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. To keep this pull request open, add a help wanted or in progress label.

@github-actions github-actions bot added the stale label May 15, 2025
@github-actions github-actions bot closed this May 16, 2025
@github-actions github-actions bot deleted the bump-localai-2.29.0 branch May 16, 2025 00:15
@chenrui333 (Member)

rice append --exec local-ai
make: rice: No such file or directory

Superseded by #227464

@chenrui333 chenrui333 added the superseded label Jun 19, 2025