-
Running Large Language Models Locally Using Ollama
Last updated: Wednesday, March 4, 2026
Published in: CODE Magazine: 2026 - March/April
Wei-Meng Lee explores the advantages of running large language models (LLMs) directly on personal hardware using Ollama, a platform designed to simplify local deployment and management of AI models. He highlights the benefits of local LLM usage, including enhanced data privacy, reduced costs, offline accessibility, and improved control over model performance and updates—advantages that cloud-based services may not always provide. The article offers a step-by-step guide on using Ollama's desktop and command-line interfaces, integrating models from Hugging Face, and customizing model behavior through Modelfiles. Wei-Meng also delves into hardware requirements, storage management, and the process of converting models to GGUF format, providing developers with practical insights for harnessing the power of LLMs locally while maintaining data ownership and operational flexibility.
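The Modelfile customization the article covers can be sketched in a few lines. This is an illustrative sketch, not the article's own code: the base model name `llama3.2`, the temperature value, and the system prompt are all assumptions, though `FROM`, `PARAMETER`, and `SYSTEM` are standard Ollama Modelfile directives.

```python
# Sketch: compose an Ollama Modelfile that customizes a base model's behavior.
# The model name "llama3.2" and the parameter values are illustrative assumptions.

def build_modelfile(base_model: str, temperature: float, system_prompt: str) -> str:
    """Return Modelfile text using Ollama's FROM / PARAMETER / SYSTEM directives."""
    return (
        f"FROM {base_model}\n"
        f"PARAMETER temperature {temperature}\n"
        f'SYSTEM """{system_prompt}"""\n'
    )

modelfile = build_modelfile("llama3.2", 0.2, "You are a concise technical assistant.")
print(modelfile)

# The resulting file would then be registered and run from the command line:
#   ollama create my-assistant -f Modelfile
#   ollama run my-assistant
```

Lowering the temperature and pinning a system prompt this way bakes the desired behavior into a named local model, so every `ollama run` starts from the same configuration.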
-
Natural Language AI-Powered Smart UI
Last updated: Friday, December 26, 2025
Published in: CODE Magazine: 2025 - July/August
Ed Charbeneau delves into the practical integration of AI-powered Smart UI components built on large language models (LLMs), moving beyond the ubiquitous generative-AI chat demos that often overshadow such applications. He illustrates the gradual enhancement of user interfaces (UIs) with LLMs through real-world examples like Smart AI Search and Smart Text, which improve user interactions by adding contextual and relational features to existing functionality. Charbeneau demonstrates how LLMs can transform user queries into actionable commands for a UI, using natural language processing to bridge the gap between human language and software interfaces and thereby enhancing the user experience while keeping familiar UI elements in place.
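The "query to actionable command" idea can be sketched as a small pipeline. This is a hedged illustration only: in the article an LLM performs the translation, while here a trivial keyword stub stands in for the model call, and the JSON command schema (`action`, `field`, `value`) is an assumed shape, not the article's.

```python
import json

# Sketch of translating a natural-language request into a structured UI command.
# llm_stub() is a stand-in for a real LLM call; the command schema is assumed.

PROMPT_TEMPLATE = (
    "Translate the user's request into a JSON UI command with keys "
    '"action", "field", and "value". Request: {query}'
)

def llm_stub(prompt: str) -> str:
    """Stand-in for an LLM call; returns a canned JSON command."""
    if "cheaper than" in prompt:
        value = prompt.rsplit("cheaper than", 1)[1].strip(" ?.")
        return json.dumps({"action": "filter", "field": "price", "value": value})
    return json.dumps({"action": "search", "field": "text", "value": prompt})

def query_to_command(query: str) -> dict:
    """Build the prompt, invoke the (stubbed) model, and parse the JSON reply."""
    prompt = PROMPT_TEMPLATE.format(query=query)
    return json.loads(llm_stub(prompt))

print(query_to_command("show laptops cheaper than 800"))
# → {'action': 'filter', 'field': 'price', 'value': '800'}
```

The UI layer never sees free text: it receives a structured command it already knows how to execute, which is what lets the familiar components stay unchanged.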
-
Exploring LangChain: A Practical Approach to Language Models and Retrieval-Augmented Generation (RAG)
Last updated: Friday, December 26, 2025
Published in: CODE Magazine: 2025 - January/February
Wei-Meng Lee provides an in-depth guide to using the LangChain framework for building applications that incorporate large language models (LLMs). He emphasizes LangChain's modular design, which enables developers to create complex workflows from customizable components and integrate external data sources, facilitating Retrieval-Augmented Generation. Key topics include constructing chains, maintaining conversational context with memory management, and leveraging models from Microsoft and Hugging Face for added flexibility and privacy. Wei-Meng also demonstrates implementing RAG for document-based querying, offering a comprehensive overview of LangChain's capabilities for developing dynamic, data-driven solutions.
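The retrieve-then-generate pattern the article builds with LangChain can be sketched in plain Python. This is a toy sketch, not LangChain's API: the word-overlap scoring stands in for embedding search, the documents are invented, and the prompt would go to a real LLM in the article's pipeline.

```python
# Minimal retrieve-then-generate sketch of the RAG pattern.
# Word-overlap ranking stands in for embedding similarity; documents are toy data.

DOCUMENTS = [
    "LangChain chains compose prompts, models, and parsers into one workflow.",
    "Retrieval-Augmented Generation grounds LLM answers in external documents.",
    "Memory components let a chain keep conversational context across turns.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query and return the top k."""
    terms = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(terms & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Stuff retrieved context into the prompt, as a RAG chain would."""
    context = "\n".join(docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

hits = retrieve("memory conversational context", DOCUMENTS)
print(build_prompt("How does memory keep conversational context?", hits))
```

The point of the pattern is that the model answers from retrieved context rather than from its training data alone, which is what makes document-based querying of private data possible.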

