In this lesson, we set up Ollama to run AI models locally on our machine, making the project accessible to everyone without requiring a paid API service. We start by installing Ollama using Homebrew on macOS (it's also available for Linux and Windows) and make sure it's running as a background service. After testing several models, we found the "tavernari/git-commit-message" model to be the most consistent for our use case, so we pull this roughly 4GB model from the Ollama registry.
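The setup steps above boil down to a few terminal commands. This is a sketch of the typical Homebrew-based flow; if you're on Linux or Windows, use the installer from ollama.com instead:

```shell
# Install Ollama via Homebrew (macOS)
brew install ollama

# Run Ollama as a background service so its API is always available
brew services start ollama

# Pull the commit-message model (~4GB download)
ollama pull tavernari/git-commit-message
```

Running Ollama as a service means you don't have to start it manually each time; the `ollama` CLI talks to the local server it keeps running.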
We test the setup by making some changes to our commit command's description, staging them, and then piping the git diff output directly into the Ollama model. The model generates a reasonable commit message from our code changes, proving that the local LLM setup works. While this open-source model isn't as sophisticated as proprietary options like OpenAI's, it provides a solid foundation for the project, and we'll add support for other providers later for users who want enhanced capabilities.
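The test itself is a single pipeline. A sketch of what that looks like (the exact diff flags are our choice; `--staged` limits the output to what's actually going into the commit):

```shell
# Stage the changes we want to commit
git add .

# Pipe only the staged diff into the model; it prints a suggested commit message
git diff --staged | ollama run tavernari/git-commit-message
```

Because `ollama run` reads the prompt from stdin, any command that produces text can feed the model this way, which is exactly the mechanism our commit command will automate.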