<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Ollama</title><link>https://rupal2k.github.io/rupalportfolio/tags/ollama/</link><atom:link href="https://rupal2k.github.io/rupalportfolio/tags/ollama/index.xml" rel="self" type="application/rss+xml"/><description>Ollama</description><generator>HugoBlox Kit (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Sun, 01 Dec 2024 00:00:00 +0000</lastBuildDate><image><url>https://rupal2k.github.io/rupalportfolio/media/icon_hu_da05098ef60dc2e7.png</url><title>Ollama</title><link>https://rupal2k.github.io/rupalportfolio/tags/ollama/</link></image><item><title>Local LLM Setup</title><link>https://rupal2k.github.io/rupalportfolio/projects/local-llm/</link><pubDate>Sun, 01 Dec 2024 00:00:00 +0000</pubDate><guid>https://rupal2k.github.io/rupalportfolio/projects/local-llm/</guid><description>&lt;p&gt;A local AI stack that uses Ollama to run open-source LLMs (Llama 3, Mistral) on personal hardware, avoiding cloud costs and keeping data off third-party servers. Includes a Streamlit web UI that integrates the Tavily web search API for real-time information retrieval alongside local model inference.&lt;/p&gt;
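&lt;p&gt;A minimal sketch of the core request flow, assuming Ollama is serving on its default port (11434) and a Tavily API key is set in the environment; the helper names below are illustrative, not taken from the project code:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import os
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"
TAVILY_URL = "https://api.tavily.com/search"

def web_context(query):
    """Fetch top Tavily results and flatten them into prompt context."""
    resp = requests.post(TAVILY_URL, json={
        "api_key": os.environ["TAVILY_API_KEY"],  # assumes the key is exported
        "query": query,
        "max_results": 3,
    })
    resp.raise_for_status()
    return "\n".join(r.get("content", "") for r in resp.json().get("results", []))

def ask(question, model="llama3"):
    """One non-streaming generation against the local Ollama REST API."""
    prompt = "Context:\n" + web_context(question) + "\n\nQuestion: " + question
    resp = requests.post(OLLAMA_URL, json={"model": model, "prompt": prompt, "stream": False})
    resp.raise_for_status()
    return resp.json()["response"]
&lt;/code&gt;&lt;/pre&gt;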
&lt;p&gt;&lt;strong&gt;Stack:&lt;/strong&gt; Ollama · Streamlit · Python · Tavily API · REST APIs&lt;/p&gt;</description></item></channel></rss>