<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Discovering Tech and AI]]></title><description><![CDATA[I'm Melvin aka DonvitoCodes on social media, an AI practitioner and software developer. With my growing experience in AI development and implementation, I am ea]]></description><link>https://blog.donvitocodes.com</link><generator>RSS for Node</generator><lastBuildDate>Fri, 17 Apr 2026 06:14:13 GMT</lastBuildDate><atom:link href="https://blog.donvitocodes.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Using the Claude Agent SDK for Non-Coding Workflows]]></title><description><![CDATA[I’ve been exploring the Claude Agent SDK, and I had this idea — why not use it for non-coding workflows instead of relying on other agent frameworks like CrewAI or LangChain?
To validate the idea, I built a simple example: a news researcher agent tha...]]></description><link>https://blog.donvitocodes.com/using-the-claude-agent-sdk-for-non-coding-workflows</link><guid isPermaLink="true">https://blog.donvitocodes.com/using-the-claude-agent-sdk-for-non-coding-workflows</guid><category><![CDATA[claude agent]]></category><category><![CDATA[AI]]></category><category><![CDATA[claude-code]]></category><dc:creator><![CDATA[DonvitoCodes]]></dc:creator><pubDate>Thu, 30 Oct 2025 12:06:20 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1761825488445/36e27f2e-706f-4d51-a6a4-fcc4bd077b3b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I’ve been exploring the <a target="_blank" href="https://docs.claude.com/en/api/agent-sdk/overview"><strong>Claude Agent SDK</strong></a>, and I had this idea — why not use it for <em>non-coding workflows</em> instead of relying on other agent frameworks like <strong>CrewAI</strong> or <strong>LangChain</strong>?</p>
<p>To validate the idea, I built a simple example: a <strong>news researcher agent</strong> that finds the latest AI news and translates it into Korean.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761825468875/cb2f638a-b742-429b-aa28-47ee752af6c0.png" alt="" class="image--center mx-auto" /></p>
<h2 id="heading-the-setup"><strong>The Setup</strong></h2>
<p>Here’s the core script:</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> asyncio
<span class="hljs-keyword">from</span> claude_agent_sdk <span class="hljs-keyword">import</span> query, ClaudeAgentOptions, AgentDefinition
<span class="hljs-keyword">from</span> claude_agent_sdk.types <span class="hljs-keyword">import</span> McpHttpServerConfig
<span class="hljs-keyword">import</span> os

<span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">main</span>():</span>
    firecrawl_api_key = os.environ[<span class="hljs-string">'FIRECRAWL_API_KEY'</span>]
    firecrawl_mcp = McpHttpServerConfig(
        type=<span class="hljs-string">"http"</span>,
        url=<span class="hljs-string">"https://mcp.firecrawl.dev/v2/mcp"</span>,
        headers={<span class="hljs-string">"Authorization"</span>: <span class="hljs-string">f"Bearer <span class="hljs-subst">{firecrawl_api_key}</span>"</span>}
    )

    translator_agent = AgentDefinition(
        description=<span class="hljs-string">"Translate the content from any language to any other language."</span>,
        prompt=<span class="hljs-string">"You are an expert language translator."</span>,
        tools=[<span class="hljs-string">"Read"</span>, <span class="hljs-string">"Edit"</span>, <span class="hljs-string">"Bash"</span>, <span class="hljs-string">"Grep"</span>],
        model=<span class="hljs-string">"sonnet"</span>
    )

    options = ClaudeAgentOptions(
        model=<span class="hljs-string">"glm-4.6"</span>,
        system_prompt=<span class="hljs-string">"You are an expert news researcher."</span>,
        permission_mode=<span class="hljs-string">'bypassPermissions'</span>,
        cwd=<span class="hljs-string">"/Users/melvin/PycharmProjects/ClaudeCodeSDK/output"</span>,
        mcp_servers={<span class="hljs-string">"firecrawl_mcp"</span>: firecrawl_mcp},
        agents={<span class="hljs-string">"translator-agent"</span>: translator_agent}
    )

    <span class="hljs-keyword">async</span> <span class="hljs-keyword">for</span> message <span class="hljs-keyword">in</span> query(
        prompt=(
            <span class="hljs-string">"What are the latest news topics in AI? "</span>
            <span class="hljs-string">"Write the results to a markdown file with URLs as references. "</span>
            <span class="hljs-string">"Then use the translator-agent to translate the content to Korean "</span>
            <span class="hljs-string">"and save it to a separate markdown file."</span>
        ),
        options=options
    ):
        print(message)

asyncio.run(main())
</code></pre>
<h2 id="heading-concept-1-using-mcps-for-data-retrieval"><strong>Concept 1: Using MCPs for Data Retrieval</strong></h2>
<p>I used the <a target="_blank" href="https://docs.firecrawl.dev/mcp-server">Firecrawl MCP</a> to fetch the latest AI news.</p>
<p>The agent gathered data, summarized it, and wrote the results into a Markdown file — all autonomously.</p>
<p>This shows how an MCP can act like an API plugin layer, enabling agents to perform real-world data collection beyond simple prompts.</p>
<hr />
<h2 id="heading-concept-2-sub-agents-for-specialized-tasks"><strong>Concept 2: Sub-Agents for Specialized Tasks</strong></h2>
<p>After gathering the news, I wanted a translated version.</p>
<p>Instead of hardcoding translation logic, I created a <strong>sub-agent</strong> — the <code>translator-agent</code> — specifically for that purpose.</p>
<p>The main agent then delegated the translation task to the sub-agent.</p>
<p>The output:</p>
<ul>
<li><p><code>ai_news_en.md</code> – English summary</p>
</li>
<li><p><code>ai_news_ko.md</code> – Korean translation</p>
</li>
</ul>
<hr />
<h2 id="heading-why-this-matters"><strong>Why This Matters</strong></h2>
<p>The <strong>Claude Agent SDK</strong> already supports:</p>
<ul>
<li><p><strong>Tools</strong> (Read, Edit, Bash, etc.)</p>
</li>
<li><p><strong>MCPs</strong> (external capability servers)</p>
</li>
<li><p><strong>Skills</strong></p>
</li>
<li><p><strong>Sub-agents</strong></p>
</li>
</ul>
<p>These are the same components other AI agent frameworks build from scratch — but here, it’s all native to Claude’s ecosystem.</p>
<p>With these building blocks, developers and researchers can rapidly compose workflows that go beyond chat, from document generation to automated pipelines.</p>
<p>I used the <strong>GLM 4.6</strong> model in this example, but it also works with Claude models like <strong>Haiku</strong> and <strong>Sonnet</strong>.</p>
<h2 id="heading-a-great-alternative-to-ai-agent-frameworks"><strong>A Great Alternative to AI Agent Frameworks</strong></h2>
<p>Frameworks like CrewAI and LangChain are excellent for building complex agent systems — but sometimes, simplicity wins.</p>
<p>The <strong>Claude Agent SDK</strong> gives you the same building blocks — tools, sub-agents, and external connectors — in a lightweight package that integrates naturally with Claude’s ecosystem.</p>
]]></content:encoded></item><item><title><![CDATA[Building a Java API connecting to LLMs with Spring AI and Ollama local models]]></title><description><![CDATA[Introduction
In the rapidly evolving world of AI, developers often need to integrate multiple AI providers into their applications. Whether you're using local models with Ollama, cloud services like OpenAI, or planning to add Anthropic or Google's Ge...]]></description><link>https://blog.donvitocodes.com/building-a-java-api-connecting-to-llms-with-spring-ai-and-ollama-local-models</link><guid isPermaLink="true">https://blog.donvitocodes.com/building-a-java-api-connecting-to-llms-with-spring-ai-and-ollama-local-models</guid><category><![CDATA[AI]]></category><category><![CDATA[Java]]></category><dc:creator><![CDATA[DonvitoCodes]]></dc:creator><pubDate>Sat, 27 Sep 2025 06:17:54 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-introduction"><strong>Introduction</strong></h2>
<p>In the rapidly evolving world of AI, developers often need to integrate multiple AI providers into their applications. Whether you're using local models with Ollama, cloud services like OpenAI, or planning to add Anthropic or Google's Gemini, having a unified interface to manage these providers is crucial.</p>
<p>In this tutorial, we'll build a flexible, extensible AI backend using Spring Boot and Spring AI that can seamlessly switch between different AI providers. We'll implement a clean architecture that makes it easy to add new providers without changing existing code.</p>
<h2 id="heading-what-well-build"><strong>What We'll Build</strong></h2>
<p>We're going to create a REST API that:</p>
<ul>
<li><p>Supports multiple AI providers through a unified interface</p>
</li>
<li><p>Allows dynamic provider and model selection per request</p>
</li>
<li><p>Implements a registry pattern for provider management</p>
</li>
<li><p>Provides proper error handling and validation</p>
</li>
<li><p>Uses Spring AI for simplified AI integration</p>
</li>
</ul>
<p>Here's what our architecture will look like:</p>
<pre><code class="lang-mermaid">graph LR
    Client[Client App] --&gt; API[REST API]
    API --&gt; Registry[Provider Registry]
    Registry --&gt; Ollama[Ollama Provider]
    Registry --&gt; Future[Future Providers]
    Ollama --&gt; LLM[Local LLMs]
</code></pre>
<h2 id="heading-prerequisites"><strong>Prerequisites</strong></h2>
<p>Before we begin, make sure you have:</p>
<ul>
<li><p>Java 21 or higher installed</p>
</li>
<li><p>Maven installed</p>
</li>
<li><p>Ollama installed and running (for local AI models)</p>
</li>
<li><p>Your favorite IDE (IntelliJ IDEA, VS Code, etc.)</p>
</li>
</ul>
<h2 id="heading-step-1-project-setup"><strong>Step 1: Project Setup</strong></h2>
<p>Let's start by creating a new Spring Boot project. You can use Spring Initializr or create it manually.</p>
<h3 id="heading-11-create-the-project-structure"><strong>1.1 Create the Project Structure</strong></h3>
<pre><code class="lang-bash">mkdir ai-backends-java
<span class="hljs-built_in">cd</span> ai-backends-java
</code></pre>
<h3 id="heading-12-create-the-pomxml"><strong>1.2 Create the</strong> <code>pom.xml</code></h3>
<pre><code class="lang-xml"><span class="hljs-meta">&lt;?xml version="1.0" encoding="UTF-8"?&gt;</span>
<span class="hljs-tag">&lt;<span class="hljs-name">project</span> <span class="hljs-attr">xmlns</span>=<span class="hljs-string">"http://maven.apache.org/POM/4.0.0"</span> 
         <span class="hljs-attr">xmlns:xsi</span>=<span class="hljs-string">"http://www.w3.org/2001/XMLSchema-instance"</span>
         <span class="hljs-attr">xsi:schemaLocation</span>=<span class="hljs-string">"http://maven.apache.org/POM/4.0.0 
         https://maven.apache.org/xsd/maven-4.0.0.xsd"</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">modelVersion</span>&gt;</span>4.0.0<span class="hljs-tag">&lt;/<span class="hljs-name">modelVersion</span>&gt;</span>

    <span class="hljs-tag">&lt;<span class="hljs-name">parent</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">groupId</span>&gt;</span>org.springframework.boot<span class="hljs-tag">&lt;/<span class="hljs-name">groupId</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">artifactId</span>&gt;</span>spring-boot-starter-parent<span class="hljs-tag">&lt;/<span class="hljs-name">artifactId</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">version</span>&gt;</span>3.5.6<span class="hljs-tag">&lt;/<span class="hljs-name">version</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">relativePath</span>/&gt;</span>
    <span class="hljs-tag">&lt;/<span class="hljs-name">parent</span>&gt;</span>

    <span class="hljs-tag">&lt;<span class="hljs-name">groupId</span>&gt;</span>com.aibackends<span class="hljs-tag">&lt;/<span class="hljs-name">groupId</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">artifactId</span>&gt;</span>ai<span class="hljs-tag">&lt;/<span class="hljs-name">artifactId</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">version</span>&gt;</span>0.0.1-SNAPSHOT<span class="hljs-tag">&lt;/<span class="hljs-name">version</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">name</span>&gt;</span>ai<span class="hljs-tag">&lt;/<span class="hljs-name">name</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">description</span>&gt;</span>AIBackends Java<span class="hljs-tag">&lt;/<span class="hljs-name">description</span>&gt;</span>

    <span class="hljs-tag">&lt;<span class="hljs-name">properties</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">java.version</span>&gt;</span>21<span class="hljs-tag">&lt;/<span class="hljs-name">java.version</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">spring-ai.version</span>&gt;</span>1.0.2<span class="hljs-tag">&lt;/<span class="hljs-name">spring-ai.version</span>&gt;</span>
    <span class="hljs-tag">&lt;/<span class="hljs-name">properties</span>&gt;</span>

    <span class="hljs-tag">&lt;<span class="hljs-name">dependencies</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">dependency</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">groupId</span>&gt;</span>org.springframework.boot<span class="hljs-tag">&lt;/<span class="hljs-name">groupId</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">artifactId</span>&gt;</span>spring-boot-starter-web<span class="hljs-tag">&lt;/<span class="hljs-name">artifactId</span>&gt;</span>
        <span class="hljs-tag">&lt;/<span class="hljs-name">dependency</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">dependency</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">groupId</span>&gt;</span>org.springframework.ai<span class="hljs-tag">&lt;/<span class="hljs-name">groupId</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">artifactId</span>&gt;</span>spring-ai-starter-model-ollama<span class="hljs-tag">&lt;/<span class="hljs-name">artifactId</span>&gt;</span>
        <span class="hljs-tag">&lt;/<span class="hljs-name">dependency</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">dependency</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">groupId</span>&gt;</span>org.springframework.boot<span class="hljs-tag">&lt;/<span class="hljs-name">groupId</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">artifactId</span>&gt;</span>spring-boot-starter-test<span class="hljs-tag">&lt;/<span class="hljs-name">artifactId</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">scope</span>&gt;</span>test<span class="hljs-tag">&lt;/<span class="hljs-name">scope</span>&gt;</span>
        <span class="hljs-tag">&lt;/<span class="hljs-name">dependency</span>&gt;</span>
    <span class="hljs-tag">&lt;/<span class="hljs-name">dependencies</span>&gt;</span>

    <span class="hljs-tag">&lt;<span class="hljs-name">dependencyManagement</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">dependencies</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">dependency</span>&gt;</span>
                <span class="hljs-tag">&lt;<span class="hljs-name">groupId</span>&gt;</span>org.springframework.ai<span class="hljs-tag">&lt;/<span class="hljs-name">groupId</span>&gt;</span>
                <span class="hljs-tag">&lt;<span class="hljs-name">artifactId</span>&gt;</span>spring-ai-bom<span class="hljs-tag">&lt;/<span class="hljs-name">artifactId</span>&gt;</span>
                <span class="hljs-tag">&lt;<span class="hljs-name">version</span>&gt;</span>${spring-ai.version}<span class="hljs-tag">&lt;/<span class="hljs-name">version</span>&gt;</span>
                <span class="hljs-tag">&lt;<span class="hljs-name">type</span>&gt;</span>pom<span class="hljs-tag">&lt;/<span class="hljs-name">type</span>&gt;</span>
                <span class="hljs-tag">&lt;<span class="hljs-name">scope</span>&gt;</span>import<span class="hljs-tag">&lt;/<span class="hljs-name">scope</span>&gt;</span>
            <span class="hljs-tag">&lt;/<span class="hljs-name">dependency</span>&gt;</span>
        <span class="hljs-tag">&lt;/<span class="hljs-name">dependencies</span>&gt;</span>
    <span class="hljs-tag">&lt;/<span class="hljs-name">dependencyManagement</span>&gt;</span>

    <span class="hljs-tag">&lt;<span class="hljs-name">build</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">plugins</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">plugin</span>&gt;</span>
                <span class="hljs-tag">&lt;<span class="hljs-name">groupId</span>&gt;</span>org.springframework.boot<span class="hljs-tag">&lt;/<span class="hljs-name">groupId</span>&gt;</span>
                <span class="hljs-tag">&lt;<span class="hljs-name">artifactId</span>&gt;</span>spring-boot-maven-plugin<span class="hljs-tag">&lt;/<span class="hljs-name">artifactId</span>&gt;</span>
            <span class="hljs-tag">&lt;/<span class="hljs-name">plugin</span>&gt;</span>
        <span class="hljs-tag">&lt;/<span class="hljs-name">plugins</span>&gt;</span>
    <span class="hljs-tag">&lt;/<span class="hljs-name">build</span>&gt;</span>
<span class="hljs-tag">&lt;/<span class="hljs-name">project</span>&gt;</span>
</code></pre>
<h3 id="heading-13-create-the-main-application-class"><strong>1.3 Create the Main Application Class</strong></h3>
<p>Create the directory structure and main class:</p>
<pre><code class="lang-bash">mkdir -p src/main/java/com/aibackends/ai
</code></pre>
<pre><code class="lang-java"><span class="hljs-comment">// src/main/java/com/aibackends/ai/AiApplication.java</span>
<span class="hljs-keyword">package</span> com.aibackends.ai;

<span class="hljs-keyword">import</span> org.springframework.boot.SpringApplication;
<span class="hljs-keyword">import</span> org.springframework.boot.autoconfigure.SpringBootApplication;

<span class="hljs-meta">@SpringBootApplication</span>
<span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">AiApplication</span> </span>{
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">void</span> <span class="hljs-title">main</span><span class="hljs-params">(String[] args)</span> </span>{
        SpringApplication.run(AiApplication.class, args);
    }
}
</code></pre>
<h2 id="heading-step-2-define-the-provider-architecture"><strong>Step 2: Define the Provider Architecture</strong></h2>
<p>Now let's create the core architecture that will allow us to support multiple AI providers.</p>
<h3 id="heading-21-create-the-provider-interface"><strong>2.1 Create the Provider Interface</strong></h3>
<p>First, we'll define an interface that all AI providers must implement:</p>
<pre><code class="lang-java"><span class="hljs-comment">// src/main/java/com/aibackends/ai/provider/ChatProvider.java</span>
<span class="hljs-keyword">package</span> com.aibackends.ai.provider;

<span class="hljs-comment">/**
 * Interface for AI chat providers
 */</span>
<span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">interface</span> <span class="hljs-title">ChatProvider</span> </span>{

    <span class="hljs-comment">/**
     * Get a chat response from the AI provider
     * 
     * <span class="hljs-doctag">@param</span> message The user's message
     * <span class="hljs-doctag">@param</span> model The model to use (provider-specific)
     * <span class="hljs-doctag">@return</span> The AI's response
     */</span>
    <span class="hljs-function">String <span class="hljs-title">getChatResponse</span><span class="hljs-params">(String message, String model)</span></span>;

    <span class="hljs-comment">/**
     * Get the provider type
     * 
     * <span class="hljs-doctag">@return</span> The provider type enum
     */</span>
    <span class="hljs-function">ProviderType <span class="hljs-title">getProviderType</span><span class="hljs-params">()</span></span>;

    <span class="hljs-comment">/**
     * Check if the provider supports a specific model
     * 
     * <span class="hljs-doctag">@param</span> model The model name to check
     * <span class="hljs-doctag">@return</span> true if the model is supported
     */</span>
    <span class="hljs-function"><span class="hljs-keyword">boolean</span> <span class="hljs-title">supportsModel</span><span class="hljs-params">(String model)</span></span>;

    <span class="hljs-comment">/**
     * Get the default model for this provider
     * 
     * <span class="hljs-doctag">@return</span> The default model name
     */</span>
    <span class="hljs-function">String <span class="hljs-title">getDefaultModel</span><span class="hljs-params">()</span></span>;
}
</code></pre>
<h3 id="heading-22-create-the-provider-type-enum"><strong>2.2 Create the Provider Type Enum</strong></h3>
<p>This enum will represent all supported providers:</p>
<pre><code class="lang-java"><span class="hljs-comment">// src/main/java/com/aibackends/ai/provider/ProviderType.java</span>
<span class="hljs-keyword">package</span> com.aibackends.ai.provider;

<span class="hljs-comment">/**
 * Enum representing supported AI providers
 */</span>
<span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">enum</span> <span class="hljs-title">ProviderType</span> </span>{
    OLLAMA(<span class="hljs-string">"ollama"</span>),
    ANTHROPIC(<span class="hljs-string">"anthropic"</span>),
    GEMINI(<span class="hljs-string">"gemini"</span>);

    <span class="hljs-keyword">private</span> <span class="hljs-keyword">final</span> String value;

    ProviderType(String value) {
        <span class="hljs-keyword">this</span>.value = value;
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> String <span class="hljs-title">getValue</span><span class="hljs-params">()</span> </span>{
        <span class="hljs-keyword">return</span> value;
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> ProviderType <span class="hljs-title">fromValue</span><span class="hljs-params">(String value)</span> </span>{
        <span class="hljs-keyword">for</span> (ProviderType type : ProviderType.values()) {
            <span class="hljs-keyword">if</span> (type.value.equalsIgnoreCase(value)) {
                <span class="hljs-keyword">return</span> type;
            }
        }
        <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> IllegalArgumentException(<span class="hljs-string">"Unknown provider type: "</span> + value);
    }
}
</code></pre>
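<p>As a quick sanity check, here is how the case-insensitive lookup in <code>fromValue</code> behaves. This is a standalone sketch that mirrors the enum above (the <code>ProviderTypeDemo</code> wrapper class exists only so the snippet compiles on its own):</p>
<pre><code class="lang-java">// Standalone copy of the article's ProviderType enum, trimmed for illustration
public class ProviderTypeDemo {
    enum ProviderType {
        OLLAMA("ollama"), ANTHROPIC("anthropic"), GEMINI("gemini");

        private final String value;
        ProviderType(String value) { this.value = value; }

        static ProviderType fromValue(String value) {
            for (ProviderType type : values()) {
                if (type.value.equalsIgnoreCase(value)) return type;
            }
            throw new IllegalArgumentException("Unknown provider type: " + value);
        }
    }

    public static void main(String[] args) {
        // Lookup is case-insensitive, so request payloads can use any casing
        System.out.println(ProviderType.fromValue("Ollama"));   // OLLAMA

        // Unknown names fail fast with IllegalArgumentException
        try {
            ProviderType.fromValue("openai");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());  // Unknown provider type: openai
        }
    }
}
</code></pre>
<p>Failing fast here means a typo in the request's provider name surfaces immediately instead of silently falling back to a default provider.</p>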
<h3 id="heading-23-create-the-provider-registry"><strong>2.3 Create the Provider Registry</strong></h3>
<p>The registry will manage all available providers and allow us to retrieve them dynamically:</p>
<pre><code class="lang-java"><span class="hljs-comment">// src/main/java/com/aibackends/ai/provider/ChatProviderRegistry.java</span>
<span class="hljs-keyword">package</span> com.aibackends.ai.provider;

<span class="hljs-keyword">import</span> org.springframework.stereotype.Component;
<span class="hljs-keyword">import</span> java.util.HashMap;
<span class="hljs-keyword">import</span> java.util.List;
<span class="hljs-keyword">import</span> java.util.Map;
<span class="hljs-keyword">import</span> java.util.Optional;

<span class="hljs-meta">@Component</span>
<span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">ChatProviderRegistry</span> </span>{

    <span class="hljs-keyword">private</span> <span class="hljs-keyword">final</span> Map&lt;ProviderType, ChatProvider&gt; providers = <span class="hljs-keyword">new</span> HashMap&lt;&gt;();

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">ChatProviderRegistry</span><span class="hljs-params">(List&lt;ChatProvider&gt; chatProviders)</span> </span>{
        <span class="hljs-comment">// Register all available providers</span>
        <span class="hljs-keyword">for</span> (ChatProvider provider : chatProviders) {
            providers.put(provider.getProviderType(), provider);
        }
    }

    <span class="hljs-comment">/**
     * Get a chat provider by type
     * 
     * <span class="hljs-doctag">@param</span> providerType The provider type
     * <span class="hljs-doctag">@return</span> The chat provider
     * <span class="hljs-doctag">@throws</span> IllegalArgumentException if provider not found
     */</span>
    <span class="hljs-function"><span class="hljs-keyword">public</span> ChatProvider <span class="hljs-title">getProvider</span><span class="hljs-params">(ProviderType providerType)</span> </span>{
        <span class="hljs-keyword">return</span> Optional.ofNullable(providers.get(providerType))
                .orElseThrow(() -&gt; <span class="hljs-keyword">new</span> IllegalArgumentException(
                        <span class="hljs-string">"Provider not available: "</span> + providerType));
    }

    <span class="hljs-comment">/**
     * Get a chat provider by string value
     * 
     * <span class="hljs-doctag">@param</span> provider The provider name
     * <span class="hljs-doctag">@return</span> The chat provider
     */</span>
    <span class="hljs-function"><span class="hljs-keyword">public</span> ChatProvider <span class="hljs-title">getProvider</span><span class="hljs-params">(String provider)</span> </span>{
        ProviderType providerType = ProviderType.fromValue(provider);
        <span class="hljs-keyword">return</span> getProvider(providerType);
    }

    <span class="hljs-comment">/**
     * Check if a provider is available
     * 
     * <span class="hljs-doctag">@param</span> providerType The provider type
     * <span class="hljs-doctag">@return</span> true if available
     */</span>
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">boolean</span> <span class="hljs-title">isProviderAvailable</span><span class="hljs-params">(ProviderType providerType)</span> </span>{
        <span class="hljs-keyword">return</span> providers.containsKey(providerType);
    }

    <span class="hljs-comment">/**
     * Get all available provider types
     * 
     * <span class="hljs-doctag">@return</span> List of available provider types
     */</span>
    <span class="hljs-function"><span class="hljs-keyword">public</span> List&lt;ProviderType&gt; <span class="hljs-title">getAvailableProviders</span><span class="hljs-params">()</span> </span>{
        <span class="hljs-keyword">return</span> providers.keySet().stream().toList();
    }
}
</code></pre>
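<p>Outside of Spring, the registry's constructor-injection pattern can be exercised by hand. The sketch below is a simplified, standalone version of the classes above: the stub provider and the trimmed two-method <code>ChatProvider</code> interface are illustrative only, but the indexing-by-type logic mirrors the real <code>ChatProviderRegistry</code> constructor:</p>
<pre><code class="lang-java">import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RegistryDemo {
    enum ProviderType { OLLAMA }

    // Trimmed version of the article's ChatProvider interface
    interface ChatProvider {
        ProviderType getProviderType();
        String getChatResponse(String message, String model);
    }

    // Stub standing in for OllamaChatService
    static class StubOllamaProvider implements ChatProvider {
        public ProviderType getProviderType() { return ProviderType.OLLAMA; }
        public String getChatResponse(String message, String model) {
            return "[" + model + "] echo: " + message;
        }
    }

    static String demo() {
        // Mirrors ChatProviderRegistry's constructor: index providers by type
        Map&lt;ProviderType, ChatProvider&gt; providers = new HashMap&lt;&gt;();
        for (ChatProvider p : List.of(new StubOllamaProvider())) {
            providers.put(p.getProviderType(), p);
        }
        // Look the provider up by type, exactly as the registry does
        ChatProvider ollama = providers.get(ProviderType.OLLAMA);
        return ollama.getChatResponse("hi", "llama3.2");
    }

    public static void main(String[] args) {
        System.out.println(demo());  // prints: [llama3.2] echo: hi
    }
}
</code></pre>
<p>In the Spring version, the <code>List&lt;ChatProvider&gt;</code> is injected automatically with every bean implementing the interface, so adding a new provider is just a matter of adding a new <code>@Service</code> class.</p>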
<h2 id="heading-step-3-implement-the-ollama-provider"><strong>Step 3: Implement the Ollama Provider</strong></h2>
<p>Now let's implement our first AI provider - Ollama, which runs AI models locally.</p>
<h3 id="heading-31-create-the-ollama-service"><strong>3.1 Create the Ollama Service</strong></h3>
<pre><code class="lang-java"><span class="hljs-comment">// src/main/java/com/aibackends/ai/service/OllamaChatService.java</span>
<span class="hljs-keyword">package</span> com.aibackends.ai.service;

<span class="hljs-keyword">import</span> com.aibackends.ai.provider.ChatProvider;
<span class="hljs-keyword">import</span> com.aibackends.ai.provider.ProviderType;
<span class="hljs-keyword">import</span> org.springframework.ai.chat.client.ChatClient;
<span class="hljs-keyword">import</span> org.springframework.ai.ollama.OllamaChatModel;
<span class="hljs-keyword">import</span> org.springframework.ai.ollama.api.OllamaOptions;
<span class="hljs-keyword">import</span> org.springframework.stereotype.Service;

<span class="hljs-keyword">import</span> java.util.List;

<span class="hljs-meta">@Service</span>
<span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">OllamaChatService</span> <span class="hljs-keyword">implements</span> <span class="hljs-title">ChatProvider</span> </span>{

    <span class="hljs-keyword">private</span> <span class="hljs-keyword">final</span> OllamaChatModel ollamaChatModel;
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">final</span> List&lt;String&gt; supportedModels = List.of(
            <span class="hljs-string">"llama3.2"</span>, <span class="hljs-string">"llama3.1"</span>, <span class="hljs-string">"llama3"</span>, <span class="hljs-string">"llama2"</span>,
            <span class="hljs-string">"mistral"</span>, <span class="hljs-string">"mixtral"</span>, <span class="hljs-string">"codellama"</span>, <span class="hljs-string">"gemma"</span>,
            <span class="hljs-string">"phi3"</span>, <span class="hljs-string">"qwen2.5"</span>, <span class="hljs-string">"deepseek-coder-v2"</span>
    );

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">OllamaChatService</span><span class="hljs-params">(OllamaChatModel ollamaChatModel)</span> </span>{
        <span class="hljs-keyword">this</span>.ollamaChatModel = ollamaChatModel;
    }

    <span class="hljs-meta">@Override</span>
    <span class="hljs-function"><span class="hljs-keyword">public</span> String <span class="hljs-title">getChatResponse</span><span class="hljs-params">(String message, String model)</span> </span>{
        <span class="hljs-comment">// Set the model in options</span>
        String modelToUse = model != <span class="hljs-keyword">null</span> ? model : getDefaultModel();
        OllamaOptions options = OllamaOptions.builder()
                .model(modelToUse)
                .build();

        <span class="hljs-comment">// Create a new ChatClient with the specified model</span>
        <span class="hljs-keyword">var</span> chatClient = ChatClient.builder(ollamaChatModel)
                .defaultOptions(options)
                .build();

        <span class="hljs-keyword">return</span> chatClient.prompt()
                .user(message)
                .call()
                .content();
    }

    <span class="hljs-meta">@Override</span>
    <span class="hljs-function"><span class="hljs-keyword">public</span> ProviderType <span class="hljs-title">getProviderType</span><span class="hljs-params">()</span> </span>{
        <span class="hljs-keyword">return</span> ProviderType.OLLAMA;
    }

    <span class="hljs-meta">@Override</span>
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">boolean</span> <span class="hljs-title">supportsModel</span><span class="hljs-params">(String model)</span> </span>{
        <span class="hljs-keyword">return</span> supportedModels.stream()
                .anyMatch(m -&gt; m.equalsIgnoreCase(model));
    }

    <span class="hljs-meta">@Override</span>
    <span class="hljs-function"><span class="hljs-keyword">public</span> String <span class="hljs-title">getDefaultModel</span><span class="hljs-params">()</span> </span>{
        <span class="hljs-keyword">return</span> <span class="hljs-string">"llama3.2"</span>;
    }
}
</code></pre>
<h3 id="heading-32-create-configuration"><strong>3.2 Create Configuration</strong></h3>
<pre><code class="lang-java"><span class="hljs-comment">// src/main/java/com/aibackends/ai/config/OllamaConfig.java</span>
<span class="hljs-keyword">package</span> com.aibackends.ai.config;

<span class="hljs-keyword">import</span> org.springframework.context.annotation.Configuration;

<span class="hljs-meta">@Configuration</span>
<span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">OllamaConfig</span> </span>{
    <span class="hljs-comment">// Spring AI auto-configuration handles the Ollama beans</span>
    <span class="hljs-comment">// No manual configuration needed when using spring-ai-starter-model-ollama</span>
}
</code></pre>
<h2 id="heading-step-4-create-the-rest-api"><strong>Step 4: Create the REST API</strong></h2>
<p>Now let's create the REST controller that will expose our AI services.</p>
<h3 id="heading-41-create-the-controller"><strong>4.1 Create the Controller</strong></h3>
<pre><code class="lang-java"><span class="hljs-comment">// src/main/java/com/aibackends/ai/controller/AIController.java</span>
<span class="hljs-keyword">package</span> com.aibackends.ai.controller;

<span class="hljs-keyword">import</span> com.aibackends.ai.provider.ChatProvider;
<span class="hljs-keyword">import</span> com.aibackends.ai.provider.ChatProviderRegistry;
<span class="hljs-keyword">import</span> com.aibackends.ai.provider.ProviderType;
<span class="hljs-keyword">import</span> org.springframework.http.ResponseEntity;
<span class="hljs-keyword">import</span> org.springframework.web.bind.annotation.*;

<span class="hljs-keyword">import</span> java.util.List;

<span class="hljs-meta">@RestController</span>
<span class="hljs-meta">@RequestMapping("/api")</span>
<span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">AIController</span> </span>{

    <span class="hljs-keyword">private</span> <span class="hljs-keyword">final</span> ChatProviderRegistry providerRegistry;

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">AIController</span><span class="hljs-params">(ChatProviderRegistry providerRegistry)</span> </span>{
        <span class="hljs-keyword">this</span>.providerRegistry = providerRegistry;
    }

    <span class="hljs-meta">@PostMapping("/chat")</span>
    <span class="hljs-keyword">public</span> ResponseEntity&lt;?&gt; chat(<span class="hljs-meta">@RequestBody</span> ChatRequest request) {
        <span class="hljs-keyword">try</span> {
            <span class="hljs-comment">// Validate request</span>
            <span class="hljs-keyword">if</span> (request.message() == <span class="hljs-keyword">null</span> || request.message().isBlank()) {
                <span class="hljs-keyword">return</span> ResponseEntity.badRequest()
                        .body(<span class="hljs-keyword">new</span> ErrorResponse(<span class="hljs-string">"Message cannot be empty"</span>));
            }

            <span class="hljs-keyword">if</span> (request.provider() == <span class="hljs-keyword">null</span> || request.provider().isBlank()) {
                <span class="hljs-keyword">return</span> ResponseEntity.badRequest()
                        .body(<span class="hljs-keyword">new</span> ErrorResponse(<span class="hljs-string">"Provider must be specified"</span>));
            }

            <span class="hljs-comment">// Get the provider</span>
            ChatProvider chatProvider = providerRegistry.getProvider(request.provider());

            <span class="hljs-comment">// Validate model if specified</span>
            <span class="hljs-keyword">if</span> (request.model() != <span class="hljs-keyword">null</span> &amp;&amp; !request.model().isBlank() 
                    &amp;&amp; !chatProvider.supportsModel(request.model())) {
                <span class="hljs-keyword">return</span> ResponseEntity.badRequest()
                        .body(<span class="hljs-keyword">new</span> ErrorResponse(<span class="hljs-string">"Model '"</span> + request.model() + 
                                <span class="hljs-string">"' is not supported by provider '"</span> + request.provider() + <span class="hljs-string">"'"</span>));
            }

            <span class="hljs-comment">// Get the response</span>
            String response = chatProvider.getChatResponse(
                    request.message(), 
                    request.model()
            );

            <span class="hljs-keyword">return</span> ResponseEntity.ok(<span class="hljs-keyword">new</span> ChatResponse(
                    response,
                    request.provider(),
                    request.model() != <span class="hljs-keyword">null</span> ? request.model() : chatProvider.getDefaultModel()
            ));

        } <span class="hljs-keyword">catch</span> (IllegalArgumentException e) {
            <span class="hljs-keyword">return</span> ResponseEntity.badRequest()
                    .body(<span class="hljs-keyword">new</span> ErrorResponse(e.getMessage()));
        } <span class="hljs-keyword">catch</span> (Exception e) {
            <span class="hljs-keyword">return</span> ResponseEntity.internalServerError()
                    .body(<span class="hljs-keyword">new</span> ErrorResponse(<span class="hljs-string">"Internal server error: "</span> + e.getMessage()));
        }
    }

    <span class="hljs-meta">@GetMapping("/providers")</span>
    <span class="hljs-function"><span class="hljs-keyword">public</span> ResponseEntity&lt;ProvidersResponse&gt; <span class="hljs-title">getProviders</span><span class="hljs-params">()</span> </span>{
        List&lt;ProviderInfo&gt; providers = providerRegistry.getAvailableProviders().stream()
                .map(providerType -&gt; {
                    ChatProvider provider = providerRegistry.getProvider(providerType);
                    <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> ProviderInfo(
                            providerType.getValue(),
                            provider.getDefaultModel()
                    );
                })
                .toList();

        <span class="hljs-keyword">return</span> ResponseEntity.ok(<span class="hljs-keyword">new</span> ProvidersResponse(providers));
    }

    <span class="hljs-meta">@GetMapping("/providers/{provider}/models")</span>
    <span class="hljs-keyword">public</span> ResponseEntity&lt;?&gt; getProviderModels(<span class="hljs-meta">@PathVariable</span> String provider) {
        <span class="hljs-keyword">try</span> {
            ChatProvider chatProvider = providerRegistry.getProvider(provider);

            <span class="hljs-comment">// For now, return a basic response. In a real implementation,</span>
            <span class="hljs-comment">// each provider would have a method to list available models</span>
            <span class="hljs-keyword">return</span> ResponseEntity.ok(<span class="hljs-keyword">new</span> ModelsResponse(
                    provider,
                    List.of(chatProvider.getDefaultModel()),
                    chatProvider.getDefaultModel()
            ));

        } <span class="hljs-keyword">catch</span> (IllegalArgumentException e) {
            <span class="hljs-keyword">return</span> ResponseEntity.badRequest()
                    .body(<span class="hljs-keyword">new</span> ErrorResponse(e.getMessage()));
        }
    }

    <span class="hljs-comment">// Request/Response DTOs</span>
    <span class="hljs-function"><span class="hljs-keyword">public</span> record <span class="hljs-title">ChatRequest</span><span class="hljs-params">(
            String message,
            String provider,
            String model
    )</span> </span>{}

    <span class="hljs-function"><span class="hljs-keyword">public</span> record <span class="hljs-title">ChatResponse</span><span class="hljs-params">(
            String response,
            String provider,
            String model
    )</span> </span>{}

    <span class="hljs-function"><span class="hljs-keyword">public</span> record <span class="hljs-title">ErrorResponse</span><span class="hljs-params">(String error)</span> </span>{}

    <span class="hljs-function"><span class="hljs-keyword">public</span> record <span class="hljs-title">ProvidersResponse</span><span class="hljs-params">(List&lt;ProviderInfo&gt; providers)</span> </span>{}

    <span class="hljs-function"><span class="hljs-keyword">public</span> record <span class="hljs-title">ProviderInfo</span><span class="hljs-params">(
            String name,
            String defaultModel
    )</span> </span>{}

    <span class="hljs-function"><span class="hljs-keyword">public</span> record <span class="hljs-title">ModelsResponse</span><span class="hljs-params">(
            String provider,
            List&lt;String&gt; models,
            String defaultModel
    )</span> </span>{}
}
</code></pre>
<h2 id="heading-step-5-configure-the-application"><strong>Step 5: Configure the Application</strong></h2>
<p>Create the application configuration file:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># src/main/resources/application.properties</span>
server.port=8085

spring.application.name=ai-backends

<span class="hljs-comment"># Ollama configuration</span>
spring.ai.ollama.base-url=http://localhost:11434
spring.ai.ollama.chat.model=llama3.2

<span class="hljs-comment"># Logging</span>
logging.level.com.aibackends=DEBUG
</code></pre>
<h2 id="heading-step-6-test-the-application"><strong>Step 6: Test the Application</strong></h2>
<h3 id="heading-61-start-ollama"><strong>6.1 Start Ollama</strong></h3>
<p>First, make sure Ollama is running and has a model installed:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Install Ollama (if not already installed)</span>
<span class="hljs-comment"># Visit https://ollama.ai for installation instructions</span>

<span class="hljs-comment"># Pull a model</span>
ollama pull llama3.2

<span class="hljs-comment"># Start Ollama (usually starts automatically)</span>
ollama serve
</code></pre>
<h3 id="heading-62-run-the-application"><strong>6.2 Run the Application</strong></h3>
<pre><code class="lang-bash">./mvnw spring-boot:run
</code></pre>
<h3 id="heading-63-test-the-endpoints"><strong>6.3 Test the Endpoints</strong></h3>
<h4 id="heading-list-available-providers"><strong>List Available Providers</strong></h4>
<pre><code class="lang-bash">curl http://localhost:8085/api/providers | jq .
</code></pre>
<p>Response:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"providers"</span>: [
    {
      <span class="hljs-attr">"name"</span>: <span class="hljs-string">"ollama"</span>,
      <span class="hljs-attr">"defaultModel"</span>: <span class="hljs-string">"llama3.2"</span>
    }
  ]
}
</code></pre>
<h4 id="heading-send-a-chat-request"><strong>Send a Chat Request</strong></h4>
<pre><code class="lang-bash">curl -X POST http://localhost:8085/api/chat \
  -H <span class="hljs-string">"Content-Type: application/json"</span> \
  -d <span class="hljs-string">'{
    "message": "Hello! What is 2 + 2?",
    "provider": "ollama",
    "model": "llama3.2"
  }'</span> | jq .
</code></pre>
<p>Response:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"response"</span>: <span class="hljs-string">"The answer to 2 + 2 is 4."</span>,
  <span class="hljs-attr">"provider"</span>: <span class="hljs-string">"ollama"</span>,
  <span class="hljs-attr">"model"</span>: <span class="hljs-string">"llama3.2"</span>
}
</code></pre>
<h2 id="heading-step-7-adding-new-providers"><strong>Step 7: Adding New Providers</strong></h2>
<p>The beauty of this architecture is how easy it is to add new providers. Let's see how you would add OpenAI support:</p>
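<p>As a refresher before adding a provider: every provider implements the same <code>ChatProvider</code> contract. The interface is defined earlier in the article and not repeated in this section, so here is a sketch reconstructed from the methods the implementations override; the two-constant <code>ProviderType</code> is abbreviated for the sketch.</p>

```java
// Reconstructed sketch of the contract a new provider must implement.
// ProviderType is abbreviated here; the real enum lists all providers.
enum ProviderType {
    OLLAMA("ollama"), OPENAI("openai");

    private final String value;
    ProviderType(String value) { this.value = value; }
    public String getValue() { return value; }
}

interface ChatProvider {
    String getChatResponse(String message, String model); // model may be null -> use default
    ProviderType getProviderType();
    boolean supportsModel(String model);
    String getDefaultModel();
}
```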
<h3 id="heading-71-add-the-dependency"><strong>7.1 Add the Dependency</strong></h3>
<p>Add to your <code>pom.xml</code>:</p>
<pre><code class="lang-xml"><span class="hljs-tag">&lt;<span class="hljs-name">dependency</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">groupId</span>&gt;</span>org.springframework.ai<span class="hljs-tag">&lt;/<span class="hljs-name">groupId</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">artifactId</span>&gt;</span>spring-ai-starter-model-openai<span class="hljs-tag">&lt;/<span class="hljs-name">artifactId</span>&gt;</span>
<span class="hljs-tag">&lt;/<span class="hljs-name">dependency</span>&gt;</span>
</code></pre>
<h3 id="heading-72-create-the-provider-implementation"><strong>7.2 Create the Provider Implementation</strong></h3>
<pre><code class="lang-java"><span class="hljs-meta">@Service</span>
<span class="hljs-meta">@ConditionalOnProperty(name = "spring.ai.openai.api-key")</span>
<span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">OpenAIChatService</span> <span class="hljs-keyword">implements</span> <span class="hljs-title">ChatProvider</span> </span>{

    <span class="hljs-keyword">private</span> <span class="hljs-keyword">final</span> OpenAiChatModel openAiChatModel;

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">OpenAIChatService</span><span class="hljs-params">(OpenAiChatModel openAiChatModel)</span> </span>{
        <span class="hljs-keyword">this</span>.openAiChatModel = openAiChatModel;
    }

    <span class="hljs-meta">@Override</span>
    <span class="hljs-function"><span class="hljs-keyword">public</span> String <span class="hljs-title">getChatResponse</span><span class="hljs-params">(String message, String model)</span> </span>{
        <span class="hljs-comment">// Mirror the Ollama service: resolve the model, then build per-request options</span>
        String modelToUse = model != <span class="hljs-keyword">null</span> ? model : getDefaultModel();
        OpenAiChatOptions options = OpenAiChatOptions.builder()
                .model(modelToUse)
                .build();

        <span class="hljs-keyword">var</span> chatClient = ChatClient.builder(openAiChatModel)
                .defaultOptions(options)
                .build();

        <span class="hljs-keyword">return</span> chatClient.prompt()
                .user(message)
                .call()
                .content();
    }

    <span class="hljs-meta">@Override</span>
    <span class="hljs-function"><span class="hljs-keyword">public</span> ProviderType <span class="hljs-title">getProviderType</span><span class="hljs-params">()</span> </span>{
        <span class="hljs-keyword">return</span> ProviderType.OPENAI;
    }

    <span class="hljs-comment">// ... other methods</span>
}
</code></pre>
<h3 id="heading-73-add-to-provider-type-enum"><strong>7.3 Add to Provider Type Enum</strong></h3>
<pre><code class="lang-java"><span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">enum</span> <span class="hljs-title">ProviderType</span> </span>{
    OLLAMA(<span class="hljs-string">"ollama"</span>),
    OPENAI(<span class="hljs-string">"openai"</span>),  <span class="hljs-comment">// Add this</span>
    ANTHROPIC(<span class="hljs-string">"anthropic"</span>),
    GEMINI(<span class="hljs-string">"gemini"</span>);
    <span class="hljs-comment">// ... rest of the enum</span>
}
</code></pre>
<h3 id="heading-74-configure-in-applicationproperties"><strong>7.4 Configure in application.properties</strong></h3>
<pre><code class="lang-bash"><span class="hljs-comment"># OpenAI configuration</span>
spring.ai.openai.api-key=your-api-key-here
spring.ai.openai.chat.options.model=gpt-3.5-turbo
</code></pre>
<p>That's it! The provider will automatically be registered and available through the API.</p>
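<p>Why does a new provider register itself automatically? Because Spring constructor-injects <em>every</em> <code>ChatProvider</code> bean into the registry as a <code>List</code>. The registry class itself isn't shown in this post, so the following is a minimal, framework-free sketch of that pattern – the <code>Map</code>-based lookup and exact method names are assumptions for illustration:</p>

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Trimmed-down stand-ins for the types used in the article.
enum ProviderType { OLLAMA, OPENAI }

interface ChatProvider {
    ProviderType getProviderType();
    String getDefaultModel();
}

// Sketch of the registry: in the Spring app this would be a @Service whose
// constructor receives all ChatProvider beans, so adding a new @Service
// implementing ChatProvider is enough to register it.
class ChatProviderRegistry {
    private final Map<ProviderType, ChatProvider> providers;

    ChatProviderRegistry(List<ChatProvider> chatProviders) {
        this.providers = chatProviders.stream()
                .collect(Collectors.toMap(ChatProvider::getProviderType, p -> p));
    }

    ChatProvider getProvider(ProviderType type) {
        ChatProvider provider = providers.get(type);
        if (provider == null) {
            throw new IllegalArgumentException("No provider registered for: " + type);
        }
        return provider;
    }

    // String overload used by the controller; valueOf throws
    // IllegalArgumentException for unknown names, which the controller catches.
    ChatProvider getProvider(String name) {
        return getProvider(ProviderType.valueOf(name.toUpperCase()));
    }

    List<ProviderType> getAvailableProviders() {
        return List.copyOf(providers.keySet());
    }
}
```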
<h2 id="heading-advanced-features"><strong>Advanced Features</strong></h2>
<h3 id="heading-error-handling"><strong>Error Handling</strong></h3>
<p>Our implementation includes comprehensive error handling:</p>
<ul>
<li><p>Validation for empty messages</p>
</li>
<li><p>Provider validation</p>
</li>
<li><p>Model validation</p>
</li>
<li><p>Graceful handling of provider errors</p>
</li>
</ul>
<h3 id="heading-model-selection"><strong>Model Selection</strong></h3>
<p>Each request can specify a different model:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"message"</span>: <span class="hljs-string">"Write a poem"</span>,
  <span class="hljs-attr">"provider"</span>: <span class="hljs-string">"ollama"</span>,
  <span class="hljs-attr">"model"</span>: <span class="hljs-string">"mistral"</span>
}
</code></pre>
<h3 id="heading-provider-discovery"><strong>Provider Discovery</strong></h3>
<p>The <code>/api/providers</code> endpoint allows clients to discover available providers dynamically.</p>
<h2 id="heading-best-practices"><strong>Best Practices</strong></h2>
<ol>
<li><p><strong>Interface Segregation</strong>: The <code>ChatProvider</code> interface is focused and specific</p>
</li>
<li><p><strong>Dependency Injection</strong>: Spring manages all dependencies automatically</p>
</li>
<li><p><strong>Error Handling</strong>: All errors are handled gracefully with appropriate HTTP status codes</p>
</li>
<li><p><strong>Extensibility</strong>: New providers can be added without modifying existing code</p>
</li>
<li><p><strong>Configuration</strong>: Each provider can be configured independently</p>
</li>
</ol>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>We've built a flexible, extensible AI backend that can work with multiple AI providers. The architecture we've implemented makes it easy to:</p>
<ul>
<li><p>Add new AI providers without changing existing code</p>
</li>
<li><p>Switch between providers dynamically</p>
</li>
<li><p>Handle errors gracefully</p>
</li>
<li><p>Validate requests properly</p>
</li>
<li><p>Discover available providers and models</p>
</li>
</ul>
<p>This approach gives you the flexibility to use local models for development and privacy-sensitive applications while being able to switch to cloud providers for production or when more powerful models are needed.</p>
<p>Happy coding! 🚀</p>
]]></content:encoded></item><item><title><![CDATA[Java-Based AI Solutions for Enterprises: Viability, Use Cases, and Market Outlook]]></title><description><![CDATA[Introduction: Java’s Role in Enterprise AI
Java has long been the backbone of enterprise systems, valued for its security, reliability, scalability, and platform independence. These very qualities that made Java the “engine of the enterprise” are now...]]></description><link>https://blog.donvitocodes.com/java-based-ai-solutions-for-enterprises-viability-use-cases-and-market-outlook</link><guid isPermaLink="true">https://blog.donvitocodes.com/java-based-ai-solutions-for-enterprises-viability-use-cases-and-market-outlook</guid><category><![CDATA[AI]]></category><dc:creator><![CDATA[DonvitoCodes]]></dc:creator><pubDate>Sat, 20 Sep 2025 12:50:20 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-introduction-javas-role-in-enterprise-ai"><strong>Introduction: Java’s Role in Enterprise AI</strong></h2>
<p>Java has long been the backbone of enterprise systems, valued for its security, reliability, scalability, and platform independence. The very qualities that made Java the <strong>“engine of the enterprise”</strong> are now critical for building AI solutions businesses can trust in operations. In fact, enterprises demand <strong>production-grade AI</strong> that is secure and maintainable and that <strong>integrates smoothly</strong> with existing systems. This creates a unique opportunity: Java’s mature ecosystem and stability can <strong>bridge the gap</strong> between cutting-edge AI models and legacy enterprise infrastructure. Modern Java frameworks (e.g. Quarkus, Spring Boot) and new libraries (e.g. LangChain4j, Spring AI) have emerged to simplify integrating large language models (LLMs) and other AI into Java applications. In short, a Java-based AI solution is not only viable – it aligns well with enterprise IT standards and leverages skills many enterprise developers already have. The <strong>market timing is ideal</strong> too: organizations across the globe are eager to adopt AI, and they are looking for solutions that plug into their existing workflows with minimal disruption.</p>
<h2 id="heading-enterprise-needs-and-ai-adoption-trends"><strong>Enterprise Needs and AI Adoption Trends</strong></h2>
<p><strong>What Enterprises Want from AI:</strong> Enterprises are broadly convinced of AI’s potential, but they have clear needs and concerns that any solution must address. <strong>Trust, transparency, and governance</strong> are top priorities – nearly half of businesses worry about data accuracy and bias in AI outputs. Ensuring AI systems are explainable, fair, and compliant is essential for adoption. Data security and privacy are equally critical: ~40% of organizations cite <strong>confidentiality concerns</strong> as a barrier, which underscores demand for on-premises or hybrid solutions that keep sensitive data in-house. <strong>Data readiness</strong> is another need – 42% of enterprises feel they lack sufficient proprietary data to train or customize AI models. This drives interest in solutions that can leverage existing data efficiently (or use techniques like synthetic data generation and federated learning to overcome data gaps). There’s also a well-documented <strong>skills gap</strong>: 42% report inadequate in-house AI expertise. Enterprises need higher-level tools and partner support to implement AI without hiring rare (and expensive) talent for every project. Finally, business leaders demand <strong>clear ROI and use-case alignment</strong> – 42% say they struggle to justify AI projects financially. They seek AI solutions targeting specific pain points (cost reduction, revenue growth, productivity boosts) with measurable outcomes, rather than “AI for AI’s sake.”</p>
<p><strong>Adoption is Accelerating:</strong> Despite the challenges, enterprise AI adoption has <strong>skyrocketed</strong> in the last couple of years. By 2024, <em>nearly 80%</em> of organizations worldwide were engaging with AI in some form – <strong>35% had fully deployed AI</strong> in at least one function and another 42% were actively piloting projects. Only a small minority (13%) have no AI plans at all. Surveys also show that <strong>AI budgets are swelling</strong>. In 2025, 84% of companies planned to increase funding for AI initiatives (up from 73% a year prior). Large enterprises have moved beyond small pilots – generative AI and ML are now becoming <strong>line items in core IT budgets</strong> rather than experimental spends. One CIO noted that <em>“what I spent in 2023 I now spend in a week”</em> on AI, reflecting the rapid scale-up of investment. This budget growth is fueled by results: firms are discovering more internal use cases and even deploying customer-facing AI features, which drive tangible business value. Indeed, top-quartile AI adopters report <strong>15–30% improvements</strong> in key metrics (productivity, customer satisfaction, etc.) across workflows enhanced by AI.</p>
<p><strong>Global and Regional Momentum:</strong> The AI wave is global, with particularly strong momentum in North America and Asia. Asia-Pacific (APAC) enterprises are racing ahead – APAC’s generative AI adoption now ranks <strong>just behind North America</strong>, and Asian firms are investing very heavily to scale successful use cases. A survey of Asian business leaders found over <strong>90% plan to scale up GenAI</strong> projects in the next two years to drive cost savings and revenue gains. Key markets like China and India are championing AI from the C-suite, upskilling their workforce, and aligning AI tightly with business goals. This suggests a <strong>huge opportunity</strong> in Asian markets for enterprise AI solutions, as organizations there are eager for partnerships to achieve their AI ambitions. (Notably, more than half of APAC companies surveyed plan to work with external partners to expand AI capabilities.)</p>
<p><strong>Demand for Integrated Solutions:</strong> Across regions, one clear trend is that enterprises want <strong>solutions, not raw tech</strong>. A recent report found 98% of executives expect vendors to help shape their AI vision and strategy – yet most feel current vendors fall short in <strong>hands-on partnership</strong>. Companies don’t just want an AI API; they want guidance, industry-specific insights, and products that seamlessly integrate into their existing processes. In practice, this means a Java AI solution should be packaged in an enterprise-friendly way (discussed more below) and come with support for integration, customization, and governance features out-of-the-box. It should also be flexible deployment-wise, since many firms need to run AI on their own infrastructure for compliance reasons. In short, the market is <strong>hungry for robust AI solutions</strong> that can be trusted within enterprise environments – and Java, with its enterprise pedigree, is well positioned to deliver on those needs.</p>
<h2 id="heading-ai-use-cases-for-enterprises-text-and-vision"><strong>AI Use Cases for Enterprises (Text and Vision)</strong></h2>
<p>AI can drive value in virtually every industry. Below we highlight key <strong>use cases</strong> – focusing on <strong>Natural Language Processing (NLP)</strong> for text-based intelligence and <strong>Computer Vision</strong> for image/video analysis – that a Java-based AI solution could tackle. These use cases reflect common enterprise pain points that AI is already helping solve:</p>
<h3 id="heading-nlp-and-text-based-use-cases"><strong>NLP and Text-Based Use Cases</strong></h3>
<ul>
<li><p><strong>Enterprise Knowledge Base Q&amp;A:</strong> Organizations struggle with information siloed across SharePoint, Confluence, email, etc. Employees waste time searching for answers, and important knowledge is often missed. An AI-powered <strong>internal Q&amp;A system</strong> can use Retrieval-Augmented Generation (RAG) to answer employees’ questions based on internal documents. This means ingesting and indexing all company docs, and using an LLM to provide natural-language answers with citations. The benefit is faster onboarding and decision-making – turning weeks of digging into a quick chat with an AI assistant. <em>Java integration advantage:</em> Java’s strength in I/O and security is ideal for building the robust pipelines needed to ingest thousands of documents from legacy systems (with proper access controls). This use case appears in many industries (e.g. a bank’s policy manuals, a manufacturer’s engineering specs, etc.) where unlocking internal knowledge has direct productivity ROI.</p>
</li>
<li><p><strong>Document Data Extraction and Processing:</strong> Enterprises drown in unstructured text – contracts, invoices, reports – that require manual data entry or review. AI can automate this. For example, extracting structured data from documents (PDFs, emails, forms) using NLP models can eliminate tedious human entry. One common scenario is processing invoices or purchase orders: an AI model can read incoming documents and pull out key fields (vendor, amounts, line items) for input into ERP systems. This reduces errors and speeds up workflows. Another example is parsing <strong>legal documents</strong> or contracts to identify clauses and compliance issues automatically. In finance, there are successful cases of AI reading loan applications or financial statements to assist analysts. <em>Java integration:</em> Tools like DL4J or TensorFlow Java can be used to run document parsing models within a Java service, and results can be mapped directly into Java objects (POJOs) for downstream processing. This makes it easy to plug AI-extraction into existing Java-based workflow systems.</p>
</li>
<li><p><strong>Customer Support Chatbots and Virtual Agents:</strong> NLP-driven <strong>chatbots</strong> are a popular AI entry point for many enterprises. These range from simple FAQ bots to advanced virtual agents that handle customer inquiries, IT helpdesk tasks, or HR queries. Modern generative AI can produce far more fluent and context-aware responses than the chatbots of old. For instance, a retail company might deploy an AI chat agent to handle order tracking questions, returns, and product FAQs across web and mobile channels. AI agents can also escalate to human reps when needed, creating a hybrid customer service model. The value is 24/7 support, faster response times, and lower call center workloads. In fact, NLP chatbots can automate ~35% of routine support tasks, leading to substantial cost savings. <em>Java integration:</em> Enterprises often want chatbot systems integrated with their backend data (order databases, CRM, etc.). A Java-based AI solution can shine here by securely connecting the AI to internal systems. For example, one approach is using an <strong>agent with tools</strong> design: expose certain Java backend services (like “lookup order status”) as tools the AI can invoke. This was identified as a key use case in one Java AI guide – enabling an LLM-powered support agent to not just answer queries, but perform actions like checking account balances via existing Java APIs. Java’s strong security and type safety help ensure the AI only performs allowed actions. Many companies (banks, telcos, e-commerce) are already integrating AI assistants this way to improve customer service while keeping systems secure.</p>
</li>
<li><p><strong>Content Generation and Summarization:</strong> Textual content is everywhere in enterprise: marketing copy, internal reports, research summaries, product descriptions, and more. Generative AI (like GPT-style models) provides a way to <strong>automate content creation</strong> or assist humans with first drafts. Use cases include generating personalized marketing emails, writing knowledge base articles based on bullet points, or creating product descriptions at scale. Summarization is equally valuable – e.g. automatically summarizing lengthy reports or meeting transcripts into concise briefs. This is especially useful in industries like consulting (summarizing research for clients) or media (summarizing news). Many companies are already leveraging AI writers to save time; for instance, Salesforce has employed an internal generative AI (via a platform called Writer) for content and saw four times the expected ROI, including ~$700k saved in the first year. <em>Java integration:</em> A Java AI solution might expose a <strong>content generation API</strong> (REST endpoints) that internal applications can call, or even integrate into content management systems. With frameworks like Quarkus enabling fast, scalable Java microservices, one can deploy robust content generation services that utilize pre-trained language models (possibly via Java bindings to an OpenAI or local model). Ensuring the generation is “grounded” in enterprise data (via RAG or templates) will be important for accuracy. Because Java is highly scalable, a Java service can handle high-throughput content requests in production – e.g. generating thousands of product summaries for an e-commerce site overnight.</p>
</li>
<li><p><strong>Sentiment Analysis and Voice of Customer:</strong> Understanding customer sentiment at scale is another text AI use case. This could involve <strong>analyzing customer feedback</strong> from surveys, social media, support tickets, or reviews using NLP. For example, a bank might analyze millions of customer feedback comments to detect common pain points or track sentiment trends after a new fee is introduced. AI-driven sentiment analysis can categorize feedback (positive, negative, neutral) and even extract themes (e.g. “customer service wait time” complaints). This helps businesses respond faster to issues and prioritize improvements. <em>Java integration:</em> Sentiment models (like BERT-based classifiers) can be deployed within Java applications to automatically tag incoming text (emails, chats) and route or log them appropriately. Elder Moraes notes that integrating such <strong>customer sentiment analysis</strong> into high-throughput data pipelines is feasible with Java, outputting results as structured Java objects that downstream systems can use. Many enterprises already use Java for stream processing (e.g. Apache Kafka Streams), and AI can be inserted into those streams for real-time analytics.</p>
</li>
<li><p><strong>Natural Language to Data Query (NL2SQL/NL2API):</strong> A growing use case is letting business users ask questions in plain English and have the AI retrieve data or insights from company databases/reports. Essentially, <strong>AI as a semantic layer</strong> on enterprise data. For instance, a sales manager might ask, “Which product had the highest growth in Asia last quarter?” and the AI would translate that into a secure database query or API call, then return the result. This empowers non-technical staff to get insights without always requiring a data analyst. Several tools and libraries (including some in Java) are emerging to translate NL to SQL. <em>Java integration:</em> Java-based logic can govern this process to ensure <strong>security and performance</strong>. One reference architecture uses an LLM to draft a SQL query, which is then validated by Java code against allowed schemas before execution. This prevents the AI from running unsafe or overly expensive queries. For example, LangChain4j provides patterns for NL-to-API translation that a Java app can implement with type safety. Many industries can benefit: in finance, analysts could query metrics; in retail, regional managers could ask about inventory levels; in healthcare, researchers could query patient stats – all in natural language.</p>
</li>
<li><p><strong>Automated Compliance and Document Review:</strong> In heavily regulated sectors (finance, healthcare, legal), firms spend enormous effort reviewing texts for compliance (regulatory filings, contracts, policies). AI can assist by <strong>reading and flagging documents</strong> that contain certain risks or non-compliant language. For example, an AI system could scan insurance claims and highlight those that possibly violate compliance rules, or compare contract clauses against a company’s standard terms. A case in point: some banks use NLP to analyze communications for compliance breaches. AI can dramatically cut the manual workload here. <em>Java integration:</em> A Java-based solution could incorporate libraries for text analysis to check documents against rulesets. One of the ten use cases described for Java developers is <strong>automating regulatory compliance monitoring</strong> – using AI to parse legal text and compare it to internal policies. The advantage of Java is that existing compliance systems (often Java-based) can call these AI modules, and the AI can output results into familiar formats (reports, case management queues, etc.). This way, companies can trust the AI as part of their established compliance workflow. Moreover, running such solutions on-premises (which Java facilitates) is often non-negotiable in these industries due to data sensitivity.</p>
</li>
</ul>
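<p>The NL2SQL guardrail above – an LLM drafts the SQL, Java validates it before execution – can be sketched in a few lines. The allow-list, keyword rules, and regex below are illustrative (a production build would use a real SQL parser), but they show where Java-side control sits in the architecture:</p>

```java
import java.util.Locale;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Minimal sketch of a Java guardrail for LLM-drafted SQL.
 * Table names and rules are hypothetical; a real system would use a
 * proper SQL parser and the customer's actual schema metadata.
 */
public class SqlGuard {
    private static final Set<String> ALLOWED_TABLES = Set.of("sales", "products", "regions");
    private static final Pattern FROM_OR_JOIN =
        Pattern.compile("\\b(?:from|join)\\s+([a-z_][a-z0-9_]*)", Pattern.CASE_INSENSITIVE);

    /** Accept only a single read-only SELECT over allow-listed tables. */
    public static boolean isAllowed(String sql) {
        String s = sql.trim().toLowerCase(Locale.ROOT);
        if (!s.startsWith("select") || s.contains(";")) return false; // one statement only
        // Crude write-keyword screen; fine for a sketch, too blunt for production
        // (it would also reject column names like "updated_at").
        for (String kw : new String[] {"insert", "update", "delete", "drop", "alter", "grant"})
            if (s.contains(kw)) return false;
        Matcher m = FROM_OR_JOIN.matcher(s);
        boolean sawTable = false;
        while (m.find()) {
            sawTable = true;
            if (!ALLOWED_TABLES.contains(m.group(1))) return false; // unknown table
        }
        return sawTable;
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("SELECT region, SUM(revenue) FROM sales GROUP BY region"));
        System.out.println(isAllowed("DROP TABLE sales"));
    }
}
```

<p>The key design point is that the model never touches the database directly: every drafted query passes through deterministic Java code that the client’s DBAs can review and extend.</p>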
<p><em>(The above are just a selection of NLP use cases. Others include AI-assisted coding (e.g. AI helping developers write Java code – IBM’s</em> <strong><em>watsonx Code Assistant</em></strong> <em>is an example targeted at Java modernization), language translation for global companies, HR resume screening, and more. The unifying theme is that enterprises are applying text analytics and generation wherever it automates routine cognitive work.)</em></p>
<h3 id="heading-computer-vision-use-cases"><strong>Computer Vision Use Cases</strong></h3>
<ul>
<li><p><strong>Video Surveillance and Security Analytics:</strong> Many enterprises use CCTV and surveillance cameras for security, especially in retail, hospitality, transportation, and public sector. AI-based <strong>video analytics</strong> can automatically detect unusual activities, safety violations, or security threats in real time. During the pandemic, companies accelerated investment in smart surveillance – for example, to ensure social distancing or mask compliance via cameras. Today, AI-enabled video systems can recognize patterns and alert authorities to issues (e.g. an intruder in a restricted area or an unattended bag in an airport). They can also reduce false alarms compared to rule-based systems by using object detection and behavior analysis. <em>Java integration:</em> Deploying vision models at the edge or within a security software suite can be done with Java (using wrappers for libraries like OpenCV or deep learning models). The challenge is processing <strong>massive real-time video feeds</strong>, which requires optimization and scalability. Solutions often involve Java for orchestrating streams and using native code (GPU) for heavy vision tasks. Enterprises may prefer on-premise processing for security video (for privacy and low latency). A Java solution packaged to run on edge servers (with GPU support) could address this use case.</p>
</li>
<li><p><strong>Quality Control in Manufacturing:</strong> <strong>Computer vision for quality inspection</strong> is a game-changer in manufacturing. Cameras on production lines can use AI to spot defects or anomalies in products far faster and more consistently than human inspectors. For instance, an AI vision system can examine circuit boards for soldering defects, or food products for packaging flaws, ejecting any defective item immediately. This improves quality and reduces waste. Some factories have reported significant reduction in defect rates by deploying AI vision systems. Similarly, <strong>safety compliance</strong> can be monitored – e.g. checking if workers are wearing helmets and vests via vision. <em>Java integration:</em> Industrial companies often use Java-based SCADA or MES systems. An AI vision module (perhaps using a JNI bridge to a CNN model) can integrate with these systems to provide alerts and stop machinery when a defect is detected. The <strong>predictive maintenance</strong> use case is closely related: by visually monitoring equipment (thermal cameras, etc.), AI can predict failures. In fact, AI-driven predictive maintenance (not only vision, but also sensor data) has cut downtime for manufacturers – these systems predict faults before they happen. According to one case, AI-based predictive maintenance helped achieve a 22% boost in efficiency in supply chain operations when combined with overall optimizations. Java’s strength in handling IoT data streams and connecting to databases can complement the AI, storing results and triggering maintenance orders.</p>
</li>
<li><p><strong>Medical Imaging and Diagnostics:</strong> In healthcare, <strong>AI vision systems</strong> are assisting doctors in analyzing medical images like X-rays, MRIs, CT scans, and even pathology slides. This is one of the most impactful AI use cases: models can detect diseases (cancers, retinal diseases, etc.) with high accuracy. For example, a <em>Nature</em> study showed an AI model (developed by Google Health) could detect breast cancer in mammograms with 94.6% sensitivity, outperforming expert radiologists who achieved 88.0%. Such AI can act as a “second reader,” catching things a human might miss and reducing false positives. Hospitals and diagnostics labs are adopting AI to get faster and more accurate readings, which improves patient outcomes. <em>Java integration:</em> While the AI models are often developed in Python, deploying them in a hospital’s IT system could involve Java. Many healthcare software systems (PACS imaging systems, electronic health record systems) have Java components. A Java-based AI solution could provide a service that takes in an imaging study and returns AI analysis (with visual annotations), which the radiologist can view in their normal workflow. Because patient data is highly sensitive, an on-prem solution is usually required – which favors a Java solution running within the hospital’s secure network. Also, Java’s reliability is important here: doctors need consistent, validated results, so the AI service must be robust.</p>
</li>
<li><p><strong>Facial Recognition and Biometric ID:</strong> Another vision use case is secure access and identity verification using AI. Companies might use facial recognition for physical building access or to verify identity in online processes (e.g. matching a selfie to an ID document for KYC in banking). AI vision can also detect spoofing attempts (ensuring it’s a live person, not a photo). Some airports, for example, use facial recognition for boarding gate identity checks. While this area raises privacy concerns and is regulated in some regions, it’s being adopted for its efficiency. <em>Java integration:</em> Biometric processing often needs to integrate with existing security systems written in Java (for instance, a Java app might interface with camera systems and employee databases). By embedding facial recognition models (via an SDK) into a Java security application, one can build a complete solution that handles capture, AI analysis, and then triggers an action (like unlocking a door or flagging an anomaly). Given Java’s use in many enterprise identity management systems, offering a Java-friendly AI module for this is advantageous.</p>
</li>
<li><p><strong>Retail Analytics and Personalization:</strong> In retail environments (physical and online), vision AI and data AI are combined to enhance customer experience and operations. <strong>In stores</strong>, cameras plus AI can analyze customer foot traffic patterns, dwell time in aisles, or even customer demographics (age/gender) to better understand behavior. On shelves, image recognition can detect out-of-stock products for immediate restocking. Several large retailers in Asia have piloted cashier-less stores where cameras track what items customers pick (like Amazon Go). <strong>Personalization</strong> is more data-driven but sometimes uses vision input (e.g. smart mirrors that recommend outfits). According to industry reports, retailers are leveraging AI for personalized recommendations and dynamic pricing based on real-time data. AI can analyze purchase history and browsing to tailor offers for each customer, increasing sales. <em>Integration:</em> Retail IT often runs on Java-based platforms (for inventory, CRM, POS systems). A Java AI solution could feed personalized recommendations into e-commerce platforms or power a dynamic pricing engine using reinforcement learning, all within a Java microservice. For in-store vision, results from AI cameras (people counts, heatmaps) could be consumed by Java analytics dashboards for store managers. Scalability is key here due to peaks (holiday rush); AI solutions may use cloud bursts. Indeed, one solution provider noted that with AI and cloud, retailers can scale personalization systems on demand during peak seasons without overprovisioning.</p>
</li>
<li><p><strong>Autonomous Vehicles and Drones:</strong> While more specialized, the transportation sector’s use of vision AI is worth noting. <strong>Autonomous vehicles</strong> rely heavily on real-time computer vision (object detection, lane detection from cameras, LIDAR processing, etc.). Companies like Tesla, Waymo, etc., have advanced this field, and even traditional automakers are embedding driver-assist AI features (adaptive cruise, collision avoidance). For enterprises, this translates to potential use in logistics (e.g. self-driving delivery robots or AI-assisted driving for fleet trucks). Similarly, drones with AI vision are used for inspecting infrastructure (power lines, pipelines) or agricultural fields. These scenarios require processing sensor data on the fly. <em>Java integration:</em> Much of the core AI here is closer to hardware and often in C++/Python, but Java could be used in management software or cloud platforms that aggregate data from vehicles/drones. For instance, a logistics company might have a Java backend system that receives data from AI-equipped drones inspecting warehouses, then uses that to trigger maintenance orders. So while Java might not run on the drone, it plays a role in the broader enterprise workflow integration.</p>
</li>
</ul>
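<p>Across the vision use cases above, the same division of labor recurs: a native/GPU model produces detections, and Java decides what the business does with them (eject a part, raise an alert, queue for a human). A minimal sketch of that Java-side routing, with hypothetical labels and thresholds, might look like this:</p>

```java
import java.util.List;

/**
 * Sketch of the Java "glue" around a production-line vision model.
 * The model itself is out of scope here; this class only shows how
 * its detections could drive line actions. Label prefix and
 * thresholds are illustrative assumptions, not a standard.
 */
public class DefectRouter {
    /** One detection as the vision model might report it. */
    public record Detection(String label, double confidence) {}

    public enum Action { PASS, EJECT, HUMAN_REVIEW }

    /** Eject on a confident defect; send borderline calls to a human inspector. */
    public static Action route(List<Detection> detections) {
        for (Detection d : detections) {
            if (!d.label().startsWith("defect_")) continue; // ignore non-defect classes
            if (d.confidence() >= 0.90) return Action.EJECT;
            if (d.confidence() >= 0.50) return Action.HUMAN_REVIEW;
        }
        return Action.PASS;
    }

    public static void main(String[] args) {
        System.out.println(route(List.of(new Detection("defect_solder_bridge", 0.97))));
        System.out.println(route(List.of(new Detection("board_ok", 0.99))));
    }
}
```

<p>In a real deployment this class would sit behind a Kafka consumer or MES callback rather than <code>main</code>, but the point stands: the thresholds and routing policy live in reviewable Java code, not inside the model.</p>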
<p><strong>Bottom line:</strong> AI use cases in text and vision are extremely broad – spanning <strong>security, customer experience, operations, and decision support</strong>. The examples above show that many of these use cases have already proven their value (e.g. &gt;50% improvement in fraud detection accuracy using AI, or significant efficiency gains in supply chains). A Java-based AI business can aim to deliver pre-built solutions or customizable modules for these high-value applications. By focusing initially on NLP and Vision, one covers a large share of enterprise AI needs: from making sense of documents and conversations to interpreting the physical world via images. Crucially, <strong>success stories abound</strong> in each area (some outlined in the next section), which helps in convincing enterprise clients that these AI interventions are not science fiction – they are achievable and often have well-documented ROI.</p>
<h2 id="heading-packaging-and-integrating-a-java-ai-solution-for-enterprise"><strong>Packaging and Integrating a Java AI Solution for Enterprise</strong></h2>
<p>Delivering AI to enterprises isn’t just about model accuracy – <strong>packaging and integration</strong> make the difference between a pilot and a production deployment. To be enterprise-ready, a Java AI solution should consider the following packaging strategies:</p>
<ul>
<li><p><strong>Flexible Deployment (Cloud, On-Premise, Hybrid):</strong> Many enterprises, due to data security or latency needs, will insist on on-premise or private cloud deployment for AI solutions. Others may be open to a vendor-hosted cloud service if data is less sensitive. The solution should be designed so it can <strong>run in the customer’s preferred environment</strong>. Using containerization (Docker/Kubernetes) is now standard to enable this portability. For instance, providing the AI components as container images that can be deployed on a Kubernetes cluster (on cloud or on-prem) gives clients freedom. Red Hat’s OpenShift AI platform emphasizes exactly this: the ability to train and deploy AI/ML workloads on-premises, in any cloud, or at the edge, depending on business requirements. A Java-based solution can leverage the write-once-run-anywhere nature of the JVM – running on a customer’s Linux servers or their preferred cloud VM with equal ease. This addresses enterprise concerns around data confidentiality (keep it on-prem if needed) and regulatory compliance (e.g. ensuring data doesn’t leave a region). In practice, a vendor might offer a managed cloud option for convenience but also an on-prem package for customers that need it.</p>
</li>
<li><p><strong>Microservices and APIs:</strong> Modern enterprise architectures favor a <strong>microservice approach</strong> for new capabilities. The AI solution should be packaged as one or more services with clean APIs (REST/GraphQL or even gRPC) that other systems can call. For example, if selling a “document analysis AI”, it might be a service with an endpoint like /extractData or /summarizeDocument. Java frameworks like Quarkus and Spring Boot are excellent for building such services. In fact, Quarkus allows compiling Java apps to native code for fast startup and low memory, making Java services as nimble as ones written in Go or Node for cloud deployment. By offering API access, you make integration easy – enterprises can call the AI from their existing software, regardless of language, as long as they can make an HTTP call. This is crucial because not all parts of a company’s system will be Java; APIs ensure interoperability.</p>
</li>
<li><p><strong>Java SDK/Library Option:</strong> In addition to stand-alone services, providing a <strong>Java SDK</strong> could be valuable for clients that want to deeply embed AI into their own Java applications. For instance, a bank with an existing Java Spring application might prefer a jar library they can import to use AI capabilities locally (especially if real-time latency or offline operation is needed). The emergence of libraries like <strong>LangChain4j</strong> (Java’s answer to LangChain) shows the demand for Java-native APIs to work with LLMs and AI pipelines. A well-documented SDK would allow enterprise developers to, say, call AIClient.summarize(text) or stream data through an AnomalyDetector class in Java. This packaging gives maximum control to the client’s dev team to integrate AI in a way that feels native to their codebase. It’s also a differentiator vs many AI startups that only offer Python libraries or SaaS APIs – a Java SDK appeals directly to the large population of Java enterprise developers.</p>
</li>
<li><p><strong>Integration Connectors:</strong> Beyond core algorithms, a sellable enterprise AI solution should include or support connectors to common enterprise data sources and software. For example, if doing an internal knowledge base Q&amp;A, connectors to SharePoint, Confluence, Google Drive, etc., to fetch documents would be needed. If doing vision, connectors to camera systems or an IIoT platform might be relevant. Packaging these <strong>integrations</strong> (or at least providing them as optional modules) greatly eases adoption. Many enterprise software providers succeed by offering a suite of connectors out-of-the-box. As a Java solution, one can utilize the rich ecosystem of Java connectors (JDBC for databases, JMS for messaging systems, etc.). Java’s longevity in enterprise means there are existing libraries for connecting to almost anything – reusing those can speed up development of integration features. This addresses the common enterprise silo issue: the AI won’t be useful if it can’t pull data from where it lives and push insights to where decisions are made.</p>
</li>
<li><p><strong>Security, Access Control, and Governance:</strong> Enterprises will scrutinize the AI solution’s security. Packaging should include robust <strong>authentication and authorization</strong> mechanisms (integration with LDAP/AD for user auth, role-based access controls for features, audit logging of AI usage, etc.). For example, if offering an AI API, ensure it supports OAuth or integration with the client’s SSO. Java’s security APIs and enterprise frameworks (like Spring Security) can be leveraged to embed these controls. Governance features might include the ability to log and review AI decisions – e.g. logging the input and output of models for audit, or providing an admin dashboard to trace how the AI arrived at an answer (to the extent possible). These needs tie back to the trust issue: features that promote <strong>transparency and oversight</strong> will make enterprises more comfortable using AI for important tasks. Therefore, the solution might be packaged with an admin UI or logging service that IT and compliance teams can use to monitor AI activity. Since Java is often used for building internal tooling, it’s straightforward to add modules for these governance aspects (for instance, using a Java web framework to create an admin console).</p>
</li>
<li><p><strong>Support for Monitoring and Scaling:</strong> In production, enterprises will expect the AI solution to have <strong>monitoring hooks, performance dashboards, and scaling options</strong>. Container orchestration (K8s) will handle some scaling, but the solution should expose metrics (via JMX or Prometheus endpoints, for example) so that the company can monitor throughput, latency, error rates of AI components. Packaging should thus include metric instrumentation (e.g. using Micrometer in Spring) and possibly autoscaling guidelines or Helm charts if Kubernetes is used. Many enterprise Java apps use APM tools (like New Relic, AppDynamics); providing compatibility with those for the AI service is a plus. Essentially, treat the AI solution like any enterprise application: it must fit into the IT department’s operational ecosystem. <em>Java’s advantage</em> here is that ops teams are very familiar with running and tuning Java services – the GC, thread pools, etc. – so they can apply the same rigor to the AI service as to other Java apps. This familiarity can reduce the friction of deploying an AI solution (versus, say, a Python-based service which might require different monitoring setups).</p>
</li>
<li><p><strong>Partner-Like Delivery:</strong> As noted, enterprises want vendors who will partner in their AI journey. So beyond the technical packaging, consider packaging the <strong>solution as a service offering</strong> with consulting/training. For example, a business plan might include offering professional services: helping customize models for the client’s domain (fine-tuning an NLP model on their data), training their staff on using the AI tool, and co-developing new use case extensions. Many enterprises are seeking guidance on AI strategy – packaging your solution with an <strong>“AI integration workshop”</strong> or pilot program can differentiate it. This isn’t a software packaging per se, but a go-to-market packaging that aligns with enterprise expectations. It can increase trust that your Java AI solution isn’t a black box you drop off, but rather a living product that will be integrated hand-in-hand with their team.</p>
</li>
</ul>
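<p>To make the SDK packaging option concrete: a facade like the <code>AIClient.summarize(text)</code> mentioned above could be sketched as below. The class name, method, and prompt template are hypothetical, and the model call is stubbed behind an interface so the sketch runs without any model dependency – a real build would plug in an LLM endpoint or a locally hosted model:</p>

```java
/**
 * Hypothetical sketch of the "Java SDK" packaging option.
 * AIClient and ModelBackend are illustrative names, not a real library;
 * the backend would normally wrap an HTTP call to a hosted or on-prem model.
 */
public class AIClient {
    /** Pluggable model call, so clients can swap providers or run offline. */
    public interface ModelBackend { String complete(String prompt); }

    private final ModelBackend backend;

    public AIClient(ModelBackend backend) { this.backend = backend; }

    /** Summarize free text; the prompt template would be tuned per deployment. */
    public String summarize(String text) {
        return backend.complete("Summarize in one sentence:\n" + text);
    }

    public static void main(String[] args) {
        // Stub backend: just reports prompt size, standing in for a real model.
        AIClient client = new AIClient(p -> "[summary of " + p.length() + "-char prompt]");
        System.out.println(client.summarize("Q3 revenue grew 12%, led by Asia."));
    }
}
```

<p>Keeping the backend behind an interface is what lets the same jar serve both the cloud-hosted and fully on-prem deployment modes discussed above.</p>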
<p>In summary, <strong>packaging for enterprise means making deployment and integration as easy as possible within existing IT frameworks</strong>. Containerized microservices, Java APIs, connectors, security and monitoring features – these ensure the AI solution can be adopted without weeks of re-engineering the client’s environment. Java is a strong choice for implementing all of this due to its enterprise toolchain maturity. A concrete example: IBM’s own AI offerings (like Watson) often provided on-premise Java-based packages for banks that needed to run AI in-house, showing that this model is feasible and in demand. Similarly, frameworks like Red Hat OpenShift AI highlight that enterprises want platforms that <strong>bring data scientists and Java developers onto one common platform</strong> with consistency in deployment and governance. Your Java solution can ride this wave by presenting itself as an “enterprise AI platform/module” rather than just an algorithm – effectively lowering the barrier for companies to plug AI into their operations.</p>
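<p>As a minimal illustration of the microservice packaging, the sketch below exposes a <code>/summarizeDocument</code> endpoint using only the JDK’s built-in HTTP server, then calls itself once to show the round trip. The endpoint name matches the example above, but the “summarizer” is a stub (simple truncation) standing in for a real model call; a production service would use Quarkus or Spring Boot with proper auth and metrics:</p>

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

/** Sketch of packaging the AI as a language-agnostic HTTP service. */
public class SummaryService {
    /** Stand-in for the real model call: naive truncation "summary". */
    static String summarize(String document) {
        return document.length() > 80 ? document.substring(0, 77) + "..." : document;
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0); // ephemeral port
        server.createContext("/summarizeDocument", exchange -> {
            byte[] body = exchange.getRequestBody().readAllBytes();
            byte[] reply = summarize(new String(body, StandardCharsets.UTF_8))
                    .getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, reply.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(reply); }
        });
        server.start();

        // Self-call to demonstrate: any HTTP-capable system could be the caller.
        int port = server.getAddress().getPort();
        HttpResponse<String> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create("http://localhost:" + port + "/summarizeDocument"))
                        .POST(HttpRequest.BodyPublishers.ofString("Quarterly results summary request."))
                        .build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.body());
        server.stop(0);
    }
}
```

<p>Because the contract is plain HTTP, the client side need not be Java at all – which is exactly the interoperability argument made in the microservices bullet above.</p>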
<h2 id="heading-success-stories-and-case-studies"><strong>Success Stories and Case Studies</strong></h2>
<p>It’s important for a business plan to highlight what has <strong>already been achieved</strong> with AI in enterprises – both to validate the concept and to provide learning from successes. Here we compile a few representative success stories and studies across industries (global and Asian) that showcase enterprise AI integration, many of which could be delivered or enhanced by a Java-centric approach:</p>
<ul>
<li><p><strong>Finance (Fraud Detection):</strong> Financial services have embraced AI to combat fraud in transactions. A Forbes analysis found that AI-based systems improved fraud detection accuracy by <em>over 50%</em> compared to traditional rule-based methods. For example, major credit card networks use machine learning models to analyze millions of transactions in real time, flagging anomalies that humans might miss. These AI systems have saved banks hundreds of millions by preventing fraudulent charges. Notably, they must integrate with legacy banking systems (often written in COBOL or Java). Firms like JP Morgan also leverage AI for other tasks – e.g. AI document review (their COIN tool reportedly saved 360k hours of legal work by auto-reviewing loan documents). Many banks run Java-heavy IT stacks, so incorporating AI via Java services was a natural path. The success in fraud detection is an ideal <strong>proof point</strong> for AI’s ROI – reducing losses and operational costs while improving security.</p>
</li>
<li><p><strong>Healthcare (Diagnostics and Operations):</strong> We already mentioned AI’s performance in medical imaging. To add: deployments of AI in radiology at hospitals in the US, Europe, and Asia have shown measurable impact. For instance, a large hospital in India using an AI chest X-ray tool saw a significant increase in tuberculosis detection in early stages, enabling earlier treatment. A UK hospital network using AI stroke detection on brain scans managed to cut average diagnosis time from 3 hours to 15 minutes. These are life-saving improvements. A <strong>study in Nature</strong> demonstrated an AI model exceeding radiologist accuracy in breast cancer detection (94.6% vs 88.0% sensitivity), suggesting AI can augment even expert professionals. Beyond imaging, hospitals use AI (often via natural language processing) to optimize operations – e.g. predicting patient no-shows, automating appointment scheduling via chatbot, and transcribing doctors’ notes. One success story is <strong>Vizient (US)</strong>, a healthcare performance improvement company: they partnered with an enterprise AI vendor (Writer) and achieved four times the estimated ROI on generative AI applications, saving about <strong>$700k in the first year</strong> while empowering employees across departments to use AI in their workflows. This underscores that with the right strategy and tools, AI can quickly pay for itself in healthcare administration. Java comes into play since many hospital IT systems (appointment systems, EHR integration layers) are Java-based – the AI solutions often had to integrate seamlessly, which they did, proving the viability of adding AI without ripping out existing systems.</p>
</li>
<li><p><strong>Manufacturing (Industry 4.0):</strong> In manufacturing and supply chain, there are numerous case studies. <strong>Siemens</strong>, for example, implemented AI-driven predictive maintenance across its factories, which reduced unplanned downtime by 20-30% on certain production lines. <strong>Bosch</strong> applied computer vision in visual inspection and reported higher defect capture rates, improving product quality with minimal increase in inspection time. A <strong>Capgemini study</strong> noted that <em>68% of supply chain organizations</em> have implemented AI-based traceability/visibility solutions, leading to an average <strong>22% increase in efficiency</strong> in those operations. One success story in Asia is a Japanese automotive manufacturer that used AI vision to detect tiny paint imperfections on car bodies – something previously done by human inspectors – and they achieved near-zero customer complaints about paint quality after deployment. These successes often involve AI alongside IoT (sensors, cameras) in an “Industry 4.0” paradigm, and integration with MES/SCADA systems (where Java is common) was key. The overall potential savings in manufacturing from AI (in maintenance, yield improvement, inventory optimization) run into billions globally, and leading firms have demonstrated these gains in practice, setting a blueprint for others.</p>
</li>
<li><p><strong>Retail and Customer Experience:</strong> Retailers have seen both top-line and bottom-line benefits from AI. For example, <strong>Amazon’s recommendation engine</strong> (an early AI success) drives an estimated 30% of its e-commerce revenue through personalized suggestions. Brick-and-mortar retailers in Asia (like Alibaba’s Hema supermarkets in China) use AI to manage inventory and logistics in real time, ensuring fresh products and efficient supply. <strong>Walmart</strong> uses AI for optimizing store layouts and on-shelf availability (even using robots for shelf scanning). On the customer front, many retailers and telecom companies implemented AI chatbots to handle support – <em>Vodafone</em> deployed a customer service bot (“TOBi”) which resolved large volumes of inquiries, leading to higher customer satisfaction and millions in annual savings. These cases show that AI can improve customer experience while lowering service costs. Importantly, companies like Salesforce have integrated generative AI features into their CRM products, and clients (like Mars and Accenture) are using those to auto-generate marketing content or sales emails. In one anecdote, <strong>Salesforce</strong> internally empowered 50+ “AI champions” to build AI apps for their workflows using a low-code AI studio. That cultural approach, alongside technology, led to high adoption internally. From a Java perspective: many e-commerce platforms (like those using SAP Hybris or custom Java eCommerce engines) have integrated AI modules for recommendations and search (often via Java-based microservices calling ML models). The success here is measured in improved conversion rates and basket sizes – and several studies show personalized recommendations can boost sales by 5-15%. Given retail’s thin margins, that’s significant.</p>
</li>
<li><p><strong>Public Sector and Others:</strong> AI success stories aren’t limited to profit-driven companies. Government agencies have used NLP to automate paperwork (e.g. an Australian agency using AI to process citizen emails saw response times drop from weeks to hours). In Singapore, the government deployed computer vision AI to monitor and predict traffic conditions, which improved traffic flow and informed infrastructure planning (part of the “smart nation” initiative). In agriculture, companies use vision AI via drones to monitor crop health over large areas, increasing yields by identifying issues early (pilots in India and Africa have shown crop yield improvements of ~15% by targeted interventions guided by AI analysis). Even in legal services, startups like <strong>Harvey</strong> (built on GPT) have shown that AI copilots can save lawyers significant time on research and drafting – some law firms report 20-50% time savings on certain tasks using these tools. Many of these emerging applications are in their early stages but have had <strong>convincing pilot results</strong>. The pattern across all these is clear: when AI is applied thoughtfully to a well-defined use case, it <strong>delivers efficiency, accuracy, or scalability improvements</strong> that were hard to achieve otherwise.</p>
</li>
</ul>
<p>It’s also instructive to note <strong>why some AI pilots succeeded</strong> where others failed. Common success factors include: executive sponsorship and alignment with business goals (BCG found Asian companies with CEO-level AI sponsorship scaled AI more successfully); starting with a high-impact use case (rather than generic experiments); ensuring data quality; and having cross-functional teams (IT working with business units) drive the project. These lessons inform the approach of any AI solution provider – it’s not just technology, but also enabling the client to adopt it properly. As a vendor of a Java AI solution, highlighting these success stories helps convince potential enterprise customers that <em>“others in your industry have done this and got results – we can help you do the same.”</em> It also helps in identifying which use cases are ripe for quick wins (e.g. if a prospect is a bank, pointing to fraud detection and customer chatbot successes; if manufacturing, pointing to predictive maintenance and QA examples).</p>
<h2 id="heading-future-outlook-and-opportunities"><strong>Future Outlook and Opportunities</strong></h2>
<p>The potential for AI in enterprise is <strong>still expanding</strong> rapidly. By focusing on Java-based AI integration, we tap into a future where AI becomes ubiquitous across business functions – and Java acts as the reliable conduit for that ubiquity. Here are some forward-looking points and opportunities:</p>
<ul>
<li><p><strong>AI Becoming Standard in Software:</strong> We are nearing a point where most enterprise software will have AI features baked in. Over <strong>60% of enterprise SaaS products</strong> already have embedded AI capabilities as of 2025, and that proportion will grow. This means enterprises will expect any new solution to be “AI-enabled” or at least AI-ready. Offering a Java platform that can “AI-ify” their existing systems is a huge opportunity. Just as databases became a standard component of software in the 2000s, AI services (for prediction, classification, generation) could become a standard component in the 2020s. A Java AI business can position itself as the provider of that standard layer for companies that have a lot of Java systems.</p>
</li>
<li><p><strong>Generative AI and Multi-Modal Fusion:</strong> The rise of large language models and generative AI opens new use cases that barely existed a couple of years ago – from AI code assistants to creative content generation to conversational analytics. Enterprises are exploring “copilots” in every department (marketing, sales, finance, HR, engineering), essentially AI assistants tailored to specific jobs. For example, an HR copilot might help draft job descriptions and screen resumes; a finance copilot might auto-generate budget reports and answer queries about financial data. These copilots need to interface with internal data securely – again a strength of an on-prem or tightly integrated Java solution. Moreover, the future is <strong>multi-modal AI</strong>: combining text, vision, audio, and structured data. Many enterprise tasks involve multiple data types (e.g. a maintenance AI might take text logs, sensor readings, and images of equipment). The potential is in <strong>fusing these modalities</strong> to provide deeper insights. A Java-based platform that can orchestrate various AI models (an image model, a text model, etc.) into one workflow would be ahead of the curve. We already see early adoption of this in, say, insurance: when processing a car accident claim, AI can analyze images of the damage, read the text description, and cross-check with historical claim data – all to automate payout decisions. The tools to do this are emerging, and a cohesive solution in Java could find a strong market.</p>
</li>
<li><p><strong>Agentic AI and Automation of Processes:</strong> Looking a bit further, enterprises will likely move beyond single-task AI to <strong>AI agents</strong> that can handle sequences of tasks and autonomously interact with systems. For instance, instead of just answering a question, an AI agent might be told, “ensure our website is up to date with the latest product info,” and then it will gather info, log into the CMS, update pages, and so forth by calling APIs – under some human supervision. This kind of <strong>autonomous workflow execution</strong> (sometimes called agentic AI) is on the horizon. Enterprises running on Java backends will need their AI agents to talk to Java systems. Already, frameworks like LangChain allow AI to invoke tools; a Java equivalent (LangChain4j, etc.) is bringing that capability to the Java world. The potential here is automating a lot of routine “digital” tasks via AI. Companies that crack this (with guardrails) could see massive efficiency gains. It’s an area to watch and incorporate into the product roadmap.</p>
</li>
<li><p><strong>Increasing AI Adoption in Asia:</strong> The Asian market specifically shows signs of <strong>faster adoption and scaling</strong> in the next few years. APAC companies are not just piloting but seriously scaling AI – many expect substantial boosts in revenue and efficiency. In 2025, APAC firms that have scaled GenAI already reported <strong>25% shorter time-to-market</strong> for new products on average. Countries like Singapore, China, and India have national AI strategies pushing enterprise adoption. This suggests that any AI solution business should have a strategy for Asia – perhaps local partnerships or ensuring the solution supports local languages (for NLP) and local cloud infrastructure. Java’s wide use in Asia (especially India’s IT sector and many ASEAN enterprises) is a plus for market entry. The potential is a rapidly growing client base in Asia that could even leapfrog Western companies in AI integration, as they sometimes build new systems with AI at the core rather than as an add-on.</p>
</li>
<li><p><strong>Continuous Market Growth:</strong> All indicators show the enterprise AI market will continue its explosive growth. The global AI market size, estimated around <strong>$390+ billion in 2025</strong>, is projected to reach <strong>$1.8 trillion by 2030</strong> (nearly 35% CAGR). This growth will be driven by both deeper penetration in existing use cases and expansion to new frontiers. For instance, small and mid-size enterprises (SMEs) are behind large firms in AI adoption, but as solutions become more turnkey, SMEs represent a large untapped segment. A Java-based solution could be packaged in a more accessible way for mid-size companies (who also often run on Java systems like ERP packages). Also, public sector and education could become big consumers of AI solutions (with government often preferring vendors that support on-prem and open standards – again, an edge for Java solutions).</p>
</li>
<li><p><strong>Competitive Landscape and Differentiation:</strong> While the opportunity is vast, competition is also growing. Tech giants offer AI cloud services (AWS, Azure, GCP) and specialized startups pop up weekly. However, there is a potential gap in <strong>enterprise-specific, integration-focused</strong> offerings. Many startups provide flashy AI demos but not the hard integration work. Enterprises often struggle to operationalize those. This is where a Java enterprise AI business can differentiate: by emphasizing <strong>integration, customization, and trust</strong>. Essentially, “we don’t just have a great model, we make it work in <em>your</em> environment safely and reliably.” Over 94% of executives are not completely satisfied with their current AI vendors – often because of poor integration or support. That leaves space for new players who understand the enterprise mindset. The future likely will see consolidation where enterprises choose a few core AI platforms to standardize on (similar to how they chose databases or ERP systems). Positioning to become one of those standard platforms – especially leveraging Java’s huge installed base – is a huge potential win.</p>
</li>
</ul>
<p>In conclusion, the viability of a Java-centric AI solution for enterprises is strongly supported by current trends and future directions. <strong>Enterprises need AI, but they need it on their terms</strong> – integrated with existing systems, governed properly, and delivering clear business value. Java’s ecosystem, performance, and enterprise adoption make it an excellent foundation to meet these needs. The use cases in text and vision we discussed are just the beginning; as AI technology advances (with better models and techniques), a Java-based platform can continuously incorporate those advances and offer them in an enterprise-friendly package. The market is validating the concept with ever-increasing adoption rates and success stories. With thoughtful packaging, strong domain use cases, and attention to enterprise requirements, a Java AI business can position itself as a key enabler in the <strong>AI transformation of global and Asian enterprises</strong> – a transformation that is well underway and poised to accelerate through 2025 and beyond.</p>
<p><strong>Sources:</strong></p>
<ul>
<li><p>Elder Moraes, <em>10 Enterprise AI Use Cases Java Developers Are Ready To Build Today</em>  </p>
</li>
<li><p>IBM, <em>The 5 Biggest AI Adoption Challenges for 2025</em> (survey data on enterprise AI hurdles)  </p>
</li>
<li><p>A16Z, <em>How 100 Enterprise CIOs Are Building and Buying Gen AI in 2025</em>  </p>
</li>
<li><p>BCG, <em>In the Race to Adopt AI, Asia-Pacific Is the Region to Watch</em>  </p>
</li>
<li><p>InfoQ Panel, <em>AI Integration for Java – to the Future, from the Past</em> (Java ecosystem tools)  </p>
</li>
<li><p>Hyperstack/NexGen, <em>Top 10 AI Use Cases in Enterprise 2025</em> (industry examples)  </p>
</li>
<li><p>Hyperstack/NexGen (Case studies: Fraud detection +50% accuracy; Medical AI vs radiologists; Supply chain stat)</p>
</li>
<li><p><a target="_blank" href="http://Writer.com">Writer.com</a>, <em>2025 Enterprise AI Adoption Report</em> (organizational aspects and ROI)  </p>
</li>
<li><p>Founders Forum, <em>AI Statistics 2024–2025</em> (global adoption and market size)  </p>
</li>
<li><p>Red Hat, <em>OpenShift AI Overview</em> (packaging AI for hybrid cloud).</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[9 Ways How AI can be Useful for Emails]]></title><description><![CDATA[Email is essential for modern communication, but it often takes up more time than it should. Crafting thoughtful replies, keeping the right tone, and managing dozens of threads can quickly become overwhelming. This is where AI can step in and make a ...]]></description><link>https://blog.donvitocodes.com/9-ways-how-ai-can-be-useful-for-emails</link><guid isPermaLink="true">https://blog.donvitocodes.com/9-ways-how-ai-can-be-useful-for-emails</guid><category><![CDATA[Productivity]]></category><category><![CDATA[chatgpt]]></category><dc:creator><![CDATA[DonvitoCodes]]></dc:creator><pubDate>Tue, 26 Aug 2025 04:54:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1756184041392/19ccd283-ed07-4eaf-a84e-e55368e78b32.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Email is essential for modern communication, but it often takes up more time than it should. Crafting thoughtful replies, keeping the right tone, and managing dozens of threads can quickly become overwhelming. This is where AI can step in and make a real difference.</p>
<p>AI can act as your writing partner for drafting and polishing replies so you spend less time typing and more time focusing on meaningful work.</p>
<p>Below, we’ll explore the key ways AI can help with replying to emails, with some examples.</p>
<h3 id="heading-1-drafting-replies-quickly"><strong>1. Drafting Replies Quickly</strong></h3>
<p>One of the most immediate benefits of AI is speed. You can feed it the main idea of what you want to say, and it can generate a professional draft almost instantly.</p>
<p><strong>Example</strong>:</p>
<ul>
<li><strong>Your input</strong>:</li>
</ul>
<pre><code class="lang-plaintext">Tell client Sarah we got the report and will review by Friday.
</code></pre>
<ul>
<li><strong>AI’s draft</strong>:</li>
</ul>
<pre><code class="lang-plaintext">“Hi Sarah,

Thank you for sending over the report. Our team will review it and share our feedback with you by Friday.

Best regards,

Melvin.”
</code></pre>
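<p>If you want to script this pattern yourself, the core is just prompt construction. Below is a minimal Python sketch; the prompt wording, the <code>build_email_prompt</code> helper, and the commented-out API call (including the model name) are illustrative assumptions, not any particular product’s API.</p>

```python
def build_email_prompt(instruction: str, tone: str = "professional") -> str:
    """Turn shorthand notes into a prompt an LLM can expand into a full reply."""
    return (
        f"You are an email-writing assistant. Write a {tone} email reply "
        f"based on this shorthand from the user:\n\n{instruction}\n\n"
        "Keep it concise, polite, and ready to send."
    )

prompt = build_email_prompt(
    "Tell client Sarah we got the report and will review by Friday.",
    tone="friendly",
)

# The prompt can then be sent to any chat-completion endpoint, e.g.
# (hypothetical client object and model name):
# reply = client.chat.completions.create(
#     model="gpt-4o-mini",
#     messages=[{"role": "user", "content": prompt}],
# )
print(prompt)
```

<p>The same helper covers tone as well: passing <code>tone="formal"</code> or <code>tone="friendly"</code> changes the register without rewriting the instruction.</p>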
<h3 id="heading-2-adapting-tone-and-style"><strong>2. Adapting Tone and Style</strong></h3>
<p>Different situations require different tones. A reply to your boss might need to be formal, while a note to a close colleague can be casual. AI can instantly adapt.</p>
<p><strong>Example</strong>:</p>
<ul>
<li><strong>Formal</strong>:</li>
</ul>
<pre><code class="lang-plaintext">“Dear Mr. Lee, I appreciate your update. I will review the materials and revert with feedback by the end of this week.”
</code></pre>
<ul>
<li><strong>Friendly</strong>:</li>
</ul>
<pre><code class="lang-plaintext">“Thanks for the update, Lee! I’ll go through the docs and get back to you by Friday.”
</code></pre>
<p>You get flexibility without extra effort.</p>
<h3 id="heading-3-handling-different-types-of-emails"><strong>3. Handling Different Types of Emails</strong></h3>
<p>Not every email needs the same depth. AI can handle quick acknowledgments, detailed breakdowns, or polished high-stakes responses.</p>
<ul>
<li><strong>Quick acknowledgments</strong></li>
</ul>
<pre><code class="lang-plaintext">“Got it, thanks for the heads-up!”
</code></pre>
<ul>
<li><strong>Detailed explanations</strong></li>
</ul>
<pre><code class="lang-plaintext">“Hi team, here’s a breakdown of the next steps: (1) finalize the draft, (2) align with marketing, and (3) send for client approval by Thursday.”
</code></pre>
<ul>
<li><strong>Polished responses for sensitive topics</strong>:</li>
</ul>
<pre><code class="lang-plaintext">“I understand your concerns. Let’s schedule a call this week to walk through the details together and ensure we’re fully aligned.”
</code></pre>
<p>AI can shift gears depending on the situation, something that usually takes us more mental energy than we realize.</p>
<h3 id="heading-4-saving-time-with-long-emails"><strong>4. Saving Time With Long Emails</strong></h3>
<p>We’ve all faced those long emails packed with multiple questions or concerns. Writing a structured reply can be exhausting. AI can break the problem down and generate organized responses.</p>
<p><strong>Example</strong>:</p>
<ul>
<li><strong>Input</strong>:</li>
</ul>
<pre><code class="lang-plaintext">A client sends a 10-paragraph email with five separate questions.
</code></pre>
<ul>
<li><strong>AI output</strong>:</li>
</ul>
<pre><code class="lang-plaintext">Opens with appreciation: “Thank you for the detailed update.”

Breaks down answers point by point in bullet form.

Closes with: “Please let me know if this addresses everything, or if we should set up a call to clarify further.”
</code></pre>
<p>This makes your replies not only faster but also clearer.</p>
<h3 id="heading-5-maintaining-consistency"><strong>5. Maintaining Consistency</strong></h3>
<p>For businesses, consistency in tone and phrasing matters. AI can be “trained” on preferred phrases, ensuring every reply reflects your voice or brand.</p>
<p><strong>Example</strong>:</p>
<ul>
<li>Instead of writing</li>
</ul>
<pre><code class="lang-plaintext">“I’ll get back to you soon,”
</code></pre>
<ul>
<li>AI can consistently generate</li>
</ul>
<pre><code class="lang-plaintext">“We’ll get back to you shortly”
</code></pre>
<p>if that’s your company’s standard wording.</p>
<p>Over time, this builds trust and professionalism.</p>
<h3 id="heading-6-prioritizing-responses"><strong>6. Prioritizing Responses</strong></h3>
<p>Inbox overload is common. AI can help you decide which emails need immediate attention and even propose quick replies.</p>
<p><strong>Example</strong>:</p>
<ul>
<li>Email:</li>
</ul>
<pre><code class="lang-plaintext">“Can you confirm the delivery date today?”
</code></pre>
<ul>
<li>AI suggests:</li>
</ul>
<pre><code class="lang-plaintext">“Hi John, yes—the delivery is scheduled for September 2. Please let me know if you need any changes.”
</code></pre>
<h3 id="heading-7-overcoming-writers-block"><strong>7. Overcoming Writer’s Block</strong></h3>
<p>Sometimes the hardest part of replying to an email is just getting started. AI can help you overcome that problem.</p>
<p><strong>Example</strong>:</p>
<ul>
<li><p>You’re asked to decline a partnership politely.</p>
</li>
<li><p>AI draft:</p>
</li>
</ul>
<pre><code class="lang-plaintext">“Thank you for reaching out and considering us for this collaboration. At this time, we are not in a position to move forward, but we appreciate your proposal and will keep your details on file for future opportunities.”
</code></pre>
<p>This saves you from overthinking sensitive responses.</p>
<h3 id="heading-8-multilingual-replies"><strong>8. Multilingual Replies</strong></h3>
<p>For global teams and clients, AI can generate or translate replies into different languages—while keeping the professional tone intact.</p>
<p><strong>Example</strong>:</p>
<ul>
<li><strong>Input</strong>:</li>
</ul>
<pre><code class="lang-plaintext">“Tell partner in French we’ll share the final report tomorrow.”
</code></pre>
<ul>
<li><strong>AI output</strong>:</li>
</ul>
<pre><code class="lang-plaintext">“Bonjour Marie, nous partagerons le rapport final demain. Merci pour votre patience.”
</code></pre>
<p>This makes international communication smoother and faster.</p>
<h3 id="heading-9-customizing-length-and-detail"><strong>9. Customizing Length and Detail</strong></h3>
<p>Sometimes you want short replies, other times a more detailed response. AI lets you control the length and depth.</p>
<p><strong>Example</strong>:</p>
<ul>
<li><strong>Brief</strong>:</li>
</ul>
<pre><code class="lang-plaintext">“Thanks for the update, Friday works for me.”
</code></pre>
<ul>
<li><strong>Detailed</strong>:</li>
</ul>
<pre><code class="lang-plaintext">“Hi James, thanks for the update. Friday works well, and I’ll prepare the agenda in advance so we can focus on the new product features during our discussion.”
</code></pre>
<hr />
<p>AI won’t replace your judgment or decision-making; it complements it. Think of it as a reliable writing partner that:</p>
<ul>
<li><p>Turns your shorthand into polished messages</p>
</li>
<li><p>Adapts tone to fit the recipient</p>
</li>
<li><p>Handles complex replies with structure</p>
</li>
<li><p>Saves time and reduces inbox stress</p>
</li>
</ul>
<p>I hope this article gave you some understanding of how you can use AI for emails.</p>
<p><strong>How do you use AI for emails? Feel free to comment below!</strong></p>
<p><strong>You can also join my Practical with AI newsletter for more AI content:</strong> <a target="_blank" href="https://practicalwithai.substack.com/"><strong>https://practicalwithai.substack.com/</strong></a></p>
]]></content:encoded></item><item><title><![CDATA[Introducing OpenAI o3-mini: A New Frontier in Cost-Effective STEM Reasoning]]></title><description><![CDATA[OpenAI unveiled its latest innovation: OpenAI o3-mini. This new model is a major leap forward in efficient, high-quality reasoning—especially in STEM fields such as science, mathematics, and coding. Today, we explore what makes o3-mini so exciting, i...]]></description><link>https://blog.donvitocodes.com/introducing-openai-o3-mini-a-new-frontier-in-cost-effective-stem-reasoning</link><guid isPermaLink="true">https://blog.donvitocodes.com/introducing-openai-o3-mini-a-new-frontier-in-cost-effective-stem-reasoning</guid><category><![CDATA[chatgpt]]></category><dc:creator><![CDATA[DonvitoCodes]]></dc:creator><pubDate>Sat, 01 Feb 2025 05:55:12 GMT</pubDate><content:encoded><![CDATA[<p>OpenAI unveiled its latest innovation: <strong>OpenAI o3-mini</strong>. This new model is a major leap forward in efficient, high-quality reasoning—especially in STEM fields such as science, mathematics, and coding. Today, we explore what makes o3-mini so exciting, its standout performance metrics, and how it is shaping the future of AI-driven reasoning.</p>
<p><strong>A Fresh Approach to Reasoning Models</strong></p>
<p>OpenAI o3-mini is described as “pushing the frontier of cost-effective reasoning” by combining speed with exceptional performance in specialized domains. Unlike previous models, it has been optimized to deliver fast, accurate responses while keeping operational costs low. This makes it particularly attractive for applications where precision in STEM-related tasks is a priority.</p>
<p><strong>Key Innovations</strong></p>
<p>• <strong>Cost Efficiency &amp; Speed:</strong> With a significant reduction in per-token pricing (a 95% reduction since GPT-4’s launch), o3-mini provides not only rapid responses but also lower latency, with time to first token averaging 2.5 seconds faster than its predecessor.</p>
<p>• <strong>Optimized STEM Reasoning:</strong> Designed specifically to tackle challenges in math, science, and coding, o3-mini demonstrates advanced reasoning capabilities, making it ideal for both academic and industrial applications.</p>
<p>• <strong>Flexible Reasoning Effort:</strong> Developers can choose between three reasoning effort options—low, medium, and high—to best suit the complexity of their use case. This flexibility ensures that o3-mini “thinks harder” when necessary or prioritizes speed when required.</p>
<p><strong>Performance That Speaks for Itself</strong></p>
<p>One of the most compelling aspects of OpenAI o3-mini is its rigorous performance on various benchmarks:</p>
<p><strong>Competition Math (AIME 2024)</strong></p>
<p>• <strong>Mathematics:</strong> With medium reasoning effort, o3-mini matches the performance of previous models on challenging math questions. When pushed to high effort, it outperforms both OpenAI o1-mini and even its broader general knowledge counterpart.</p>
<p><strong>PhD-level Science Questions (GPQA Diamond)</strong></p>
<p>• <strong>Science Excellence:</strong> In high-difficulty science challenges, o3-mini proves its mettle by achieving comparable performance to the more extensive OpenAI o1 model when using high reasoning effort. This ensures that users looking for precision in academic or research settings can rely on its output.</p>
<p><strong>Code and Software Engineering</strong></p>
<p>• <strong>Competition Coding:</strong> On platforms like Codeforces, o3-mini exhibits significant improvement in coding tasks with an Elo rating of 2073 at high reasoning effort.</p>
<p>• <strong>Software Engineering Benchmarks:</strong> In tests like SWE-bench Verified, o3-mini delivers the highest accuracy among its peers, making it an attractive option for developers involved in production-level coding tasks.</p>
<p><strong>Additional Metrics</strong></p>
<p>• <strong>LiveBench Coding:</strong> Further tests confirm that even at medium reasoning effort, o3-mini can outperform previous models in speed and overall efficiency.</p>
<p>• <strong>General Knowledge:</strong> Its comprehensive training allows it to excel not only in technical subjects but also in broader general knowledge areas, ensuring a well-rounded performance across diverse queries.</p>
<p><strong>Empowering Developers with New Features</strong></p>
<p>For the developer community, o3-mini brings several highly anticipated features:</p>
<p>• <strong>Function Calling and Structured Outputs:</strong> These capabilities allow for seamless integration into production workflows, giving developers the tools needed for more dynamic and structured interactions.</p>
<p>• <strong>Developer Messages and Streaming:</strong> Out-of-the-box support for streaming ensures that responses are delivered quickly, which is crucial for real-time applications.</p>
<p>• <strong>Upgraded API Tiers:</strong> With improvements in rate limits—tripling the messages per day for ChatGPT Plus, Team, and Pro users—OpenAI is making it easier for a wide range of users to access and benefit from its advanced reasoning capabilities.</p>
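<p>To make these developer features concrete, here is a sketch of what a function-calling request with a reasoning-effort setting looks like in the OpenAI chat-completions style. The <code>get_delivery_status</code> tool is a made-up example, and the exact field names should be verified against the official API reference before use.</p>

```python
# Tool definition in the chat-completions function-calling format.
# The tool itself is hypothetical; only the envelope shape matters here.
tool = {
    "type": "function",
    "function": {
        "name": "get_delivery_status",
        "description": "Look up the delivery status for an order.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}

# Request payload: reasoning_effort selects how hard o3-mini "thinks".
request = {
    "model": "o3-mini",
    "reasoning_effort": "high",  # "low", "medium", or "high"
    "messages": [{"role": "user", "content": "Where is order 42?"}],
    "tools": [tool],
}

# Actual call omitted (requires an API key and the openai package):
# response = client.chat.completions.create(**request)
print(request["model"], request["reasoning_effort"])
```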
<p><strong>Safety and Responsible Deployment</strong></p>
<p>OpenAI has placed a strong emphasis on safety with o3-mini. By employing techniques such as <strong>deliberative alignment</strong>, the model is trained to adhere to human-written safety guidelines before responding to prompts. Extensive evaluations—including disallowed content and jailbreak tests—ensure that o3-mini not only performs well but also responds in a safe and controlled manner.</p>
<p><strong>Looking Ahead</strong></p>
<p>The release of OpenAI o3-mini is more than just an upgrade—it’s a demonstration of how targeted innovations in AI can transform the landscape of STEM problem-solving. With its balance of speed, precision, and cost-efficiency, o3-mini is set to become a vital tool for educators, developers, and researchers alike.</p>
<p><strong>In summary</strong>, OpenAI o3-mini represents a significant advancement in the realm of AI reasoning models. Its performance across mathematical, scientific, and coding evaluations confirms that small models can indeed achieve high levels of intelligence without compromising on efficiency or safety. As OpenAI continues to refine and expand its models, we can expect even more breakthroughs that will further democratize access to high-quality AI.</p>
<p><em>For further details on OpenAI o3-mini, including technical benchmarks and full evaluation metrics, visit the</em> <a target="_blank" href="https://openai.com/index/openai-o3-mini/"><em>official OpenAI page</em></a><em>.</em></p>
]]></content:encoded></item><item><title><![CDATA[My Top AI Coding Tools]]></title><description><![CDATA[AI tools are revolutionizing software development by offering a range of capabilities, from code completion and generation to UI design and code review, with notable tools like GitHub Copilot, Cursor AI, and Vercel v0 providing unique features such a...]]></description><link>https://blog.donvitocodes.com/my-top-ai-coding-tools</link><guid isPermaLink="true">https://blog.donvitocodes.com/my-top-ai-coding-tools</guid><category><![CDATA[software development]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[coding]]></category><category><![CDATA[generative ai]]></category><dc:creator><![CDATA[DonvitoCodes]]></dc:creator><pubDate>Mon, 28 Oct 2024 09:43:15 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1730109015761/6e89278f-2a46-4264-9127-572e7c3946d8.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>AI tools are revolutionizing software development by offering a range of capabilities, from code completion and generation to UI design and code review, with notable tools like GitHub Copilot, Cursor AI, and Vercel v0 providing unique features such as real-time suggestions and AI-powered UI generation. Meanwhile, platforms like Codeium, <a target="_blank" href="http://Continue.dev">Continue.dev</a>, and <a target="_blank" href="http://Bolt.new">Bolt.new</a> focus on privacy, open-source integration, and web-based coding assistance, catering to diverse developer needs and enhancing productivity across various stages of the development process.</p>
<h3 id="heading-ai-coding-copilots">AI Coding Copilots</h3>
<p>AI coding copilots have revolutionized software development, offering a range of features to enhance productivity and code quality. Here's a more detailed look at some of the leading AI-powered coding assistants:</p>
<ul>
<li><p><a target="_blank" href="http://cursor.com">Cursor AI</a></p>
<ul>
<li><p>Built on Visual Studio Code for a familiar interface</p>
</li>
<li><p>Offers deep AI integration for code generation and refactoring</p>
</li>
<li><p>Analyzes entire codebases for context-aware suggestions</p>
</li>
<li><p>Provides in-line documentation and explanations</p>
</li>
<li><p>Supports natural language prompts for code generation</p>
</li>
</ul>
</li>
<li><p><a target="_blank" href="https://github.com/features/copilot">GitHub Copilot</a></p>
<ul>
<li><p>Integrates with various IDEs including VS Code, JetBrains, and Vim/Neovim</p>
</li>
<li><p>Provides real-time code suggestions based on context</p>
</li>
<li><p>Excels in generating boilerplate code and completing repetitive tasks</p>
</li>
<li><p>Trained on a vast repository of open-source code</p>
</li>
<li><p>Offers a chat interface for more complex coding queries</p>
</li>
</ul>
</li>
<li><p><a target="_blank" href="https://v0.dev/">Vercel v0</a></p>
<ul>
<li><p>Specializes in UI generation, transforming text descriptions into functional React components</p>
</li>
<li><p>Provides real-time preview of generated UI components</p>
</li>
<li><p>Supports integration with popular frontend frameworks and libraries</p>
</li>
<li><p>Offers accessibility considerations in generated components</p>
</li>
<li><p>Allows for easy deployment and integration with existing projects</p>
</li>
</ul>
</li>
<li><p><a target="_blank" href="https://codeium.com/">Codeium</a></p>
<ul>
<li><p>Supports 70+ programming languages</p>
</li>
<li><p>Offers in-line Fill-in-the-Middle technology for improved accuracy</p>
</li>
<li><p>Emphasizes privacy and security with options for on-premise deployment</p>
</li>
<li><p>Provides natural language processing for code generation</p>
</li>
<li><p>Integrates with popular IDEs and code editors</p>
</li>
</ul>
</li>
<li><p><a target="_blank" href="http://Continue.dev">Continue.dev</a></p>
<ul>
<li><p>Open-source AI coding assistant</p>
</li>
<li><p>Supports multiple AI models, including local LLMs</p>
</li>
<li><p>Integrates with existing IDEs for seamless workflow</p>
</li>
<li><p>Allows querying of API endpoints for flexible AI assistance</p>
</li>
<li><p>Focuses on customizability and developer control</p>
</li>
</ul>
</li>
<li><p><a target="_blank" href="https://supermaven.com/">Supermaven</a></p>
<ul>
<li><p>Specializes in AI-assisted code review and bug detection</p>
</li>
<li><p>Helps improve code quality by catching potential issues early</p>
</li>
<li><p>Offers suggestions for code optimization and best practices</p>
</li>
<li><p>Integrates with version control systems for streamlined workflows</p>
</li>
<li><p>Provides detailed explanations for detected issues and suggested fixes</p>
</li>
</ul>
</li>
<li><p><a target="_blank" href="http://bolt.new">Bolt.new</a></p>
<ul>
<li><p>Web-based AI coding assistant for rapid prototyping</p>
</li>
<li><p>Supports natural language prompting for code generation</p>
</li>
<li><p>Offers multi-framework support for various web technologies</p>
</li>
<li><p>Provides integrated deployment options</p>
</li>
<li><p>Features an AI-powered chat interface for coding assistance</p>
</li>
</ul>
</li>
</ul>
<p>These AI copilots cater to different aspects of the development workflow, from code generation and completion to UI design and code review, offering developers a comprehensive toolkit to streamline their coding process and enhance productivity.</p>
<h2 id="heading-my-ai-development-stack">My AI Development Stack</h2>
<p>I leverage a powerful combination of AI-powered development tools to streamline my coding workflow. <a target="_blank" href="http://cursor.com"><strong>Cursor</strong></a> serves as my primary IDE, offering intelligent code suggestions and comprehensive codebase analysis while maintaining a familiar VS Code-like environment.</p>
<p>For rapid prototyping and web development, I use <a target="_blank" href="https://bolt.new/"><strong>Bolt.new</strong></a>'s web-based interface, which excels at quickly turning ideas into functional applications with its natural language prompting and seamless deployment capabilities.</p>
<p><a target="_blank" href="http://v0.dev"><strong>Vercel v0</strong></a> handles my UI component generation needs, transforming text descriptions into polished React components with built-in accessibility features.</p>
<p>Complementing these, <a target="_blank" href="https://codeium.com/"><strong>Codeium</strong></a> provides robust code completion across multiple languages, with its Fill-in-the-Middle technology offering accurate suggestions. This combination creates a powerful ecosystem that enhances productivity across different aspects of the development lifecycle, from initial prototyping to polished implementation.</p>
<p>I also regularly chat with <a target="_blank" href="https://claude.ai/"><strong>Claude AI</strong></a> and <a target="_blank" href="https://chatgpt.com/"><strong>ChatGPT</strong></a> when I need help solving tricky coding problems or want to understand complex concepts better. While I need to copy-paste code from these chat AIs, their help with debugging and explaining things clearly makes them essential to my workflow.</p>
<h2 id="heading-cursor-ai-for-interative-development">Cursor AI for Iterative Development</h2>
<p>Cursor AI, available at cursor.com, is a cutting-edge AI-powered code editor that has gained significant traction among developers for its innovative features and seamless integration of artificial intelligence into the coding process.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730110429390/991f4ad5-466f-40ba-9f29-b34eab1320b5.png" alt class="image--center mx-auto" /></p>
<p>Built on top of Visual Studio Code, Cursor AI combines the familiarity of a traditional IDE with advanced AI capabilities, offering a unique coding experience. The editor provides context-aware code suggestions and completions, significantly enhancing developer productivity. It can generate entire functions, classes, and code blocks based on natural language prompts, allowing developers to express their ideas in plain English and have them translated into functional code.</p>
<p>One of Cursor AI's standout features is its ability to understand and analyze the entire codebase, providing suggestions and insights that are relevant to the project as a whole, not just the current file. This comprehensive understanding enables more accurate and contextually appropriate code generation and refactoring suggestions.</p>
<p>Cursor AI also excels in code documentation and explanation. It can automatically generate meaningful comments and documentation for existing code, saving developers significant time and improving code maintainability. Additionally, it offers an integrated AI chat feature that allows developers to ask questions about their code or request explanations for complex functions without leaving the editor.</p>
<p>The platform prioritizes privacy and security, offering a privacy mode where none of the user's code is stored by the company. This feature, along with its SOC 2 certification, makes Cursor AI a viable option for developers working on sensitive projects or within organizations with strict data protection requirements.</p>
<p>Cursor AI offers a free two-week trial with full features. Paid plans start at $20 per month for individuals and $40 per month for businesses. The business plan includes additional features such as organization-wide enforced privacy mode and an admin dashboard for project management. You can also use your own OpenAI or Anthropic Claude API key with Cursor.</p>
<h2 id="heading-boltnew-a-great-starting-point">Bolt.new - a great starting point!</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730110014928/e9189a1e-7e7b-4137-8add-8de6484b062c.png" alt class="image--center mx-auto" /></p>
<p><a target="_blank" href="http://bolt.new">Bolt.new</a> is a powerful web-based AI coding assistant that stands out among other AI tools for software development. Unlike traditional IDEs or code editors, Bolt.new operates entirely in the browser, offering a unique and accessible approach to AI-assisted coding. Key features of Bolt.new include:</p>
<ol>
<li><p>Rapid prototyping: Bolt.new excels at quickly turning ideas into functional web applications, allowing developers to create and deploy projects in minutes.</p>
</li>
<li><p>Natural language prompting: Users can describe their desired application in plain English, and Bolt.new will generate the corresponding code and project structure.</p>
</li>
<li><p>Multi-framework support: Bolt.new can work with various web frameworks, including Next.js, React, and others, adapting to different project requirements.</p>
</li>
<li><p>Integrated deployment: The platform offers one-click deployment to Netlify, streamlining the process of making applications live on the internet.</p>
</li>
<li><p>AI-powered chat: With the recent addition of "Ask Bolt," developers can now interact with AI directly within the editor, enhancing the coding experience.</p>
</li>
<li><p>Code generation and completion: Bolt.new provides context-aware code suggestions and can generate entire functions or components based on user prompts.</p>
</li>
<li><p>Beginner-friendly interface: The platform's UI is designed to be accessible to newcomers, resembling familiar chat interfaces like ChatGPT or Claude.</p>
</li>
</ol>
<p>Bolt.new's web-based nature sets it apart from standalone editors like Cursor AI or IDE plugins like GitHub Copilot. It offers a more integrated experience, handling everything from initial code generation to deployment within a single platform. This makes it particularly useful for rapid prototyping and idea validation, allowing developers to quickly bring concepts to life without the need for extensive setup or local development environments.</p>
<h2 id="heading-vercel-v0-generate-ui-components">Vercel v0 - Generate UI Components</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730109975923/6d8c32fb-2af2-4e5c-a66e-ed839f384f42.png" alt class="image--center mx-auto" /></p>
<p>Vercel v0 is a groundbreaking AI-powered UI generation tool that transforms natural language descriptions into functional React code. Launched by Vercel, v0 streamlines the process of creating user interfaces by leveraging generative AI technology to produce high-quality, customizable components based on shadcn/ui. Key features of Vercel v0 include:</p>
<ol>
<li><p>Text-to-UI conversion: Users can describe their desired interface in plain English, and v0 generates corresponding React components.</p>
</li>
<li><p>Real-time preview: As code is generated, v0 provides an instant visual representation of the UI, allowing for immediate feedback and iteration.</p>
</li>
<li><p>Code customization: Developers can refine and modify the generated code directly within the v0 interface.</p>
</li>
<li><p>Integration with popular frameworks: v0 produces code compatible with React and Next.js, making it easy to incorporate into existing projects.</p>
</li>
<li><p>Support for third-party libraries: The tool can integrate components from various libraries, including Material UI and react-three-fiber for 3D graphics.</p>
</li>
<li><p>Accessibility considerations: v0 generates components with built-in accessibility features, adhering to WAI-ARIA guidelines.</p>
</li>
<li><p>Deployment options: Users can copy the generated code or install it directly into their codebase using the shadcn CLI.</p>
</li>
</ol>
<p>v0 excels in rapid prototyping and UI experimentation, allowing developers and designers to quickly iterate on ideas without writing extensive code from scratch. It's particularly useful for creating landing pages, pricing tables, and other common web components. While v0 is primarily focused on frontend development, it also offers some backend integration capabilities, such as fetching data from external sources.</p>
<p>The tool is continually evolving, with plans to add support for custom design systems, theming, and image-to-code transformations. Vercel v0 is available in both free and paid tiers, with subscription plans offering increased generation credits and additional features. As the tool transitions from Alpha to Beta, it's gaining popularity among developers for its ability to accelerate the UI design and development process.</p>
<h2 id="heading-codeium-ai-code-completion">Codeium AI Code Completion</h2>
<p>Codeium is a powerful AI-powered code completion tool that offers functionality similar to GitHub Copilot, but with some unique features and advantages. It provides context-aware suggestions and multi-line code generation across 70+ programming languages. Codeium's autocomplete feature is designed to be faster than thought, producing high-quality suggestions with incredibly low latencies. Key features of Codeium include:</p>
<ul>
<li><p>In-line Fill-in-the-Middle (FIM) technology, which improves the accuracy of code completion by considering surrounding context.</p>
</li>
<li><p>Integration with popular IDEs and code editors such as Visual Studio Code, IntelliJ, Sublime Text, and more.</p>
</li>
<li><p>A free version for individual developers, with paid plans starting at $12 per user per month for additional features.</p>
</li>
<li><p>Privacy-focused approach, analyzing only necessary code and offering options for on-premise deployment.</p>
</li>
<li><p>Natural language processing capabilities, allowing developers to turn plain text prompts into code<a target="_blank" href="https://snappify.com/blog/ai-tools-for-developers">4</a>.</p>
</li>
</ul>
<p>Unlike GitHub Copilot, Codeium emphasizes its commitment to user privacy and offers flexible deployment options, making it an attractive alternative for developers and organizations with specific security requirements.</p>
<h2 id="heading-using-claude-and-chatgpt-for-coding">Using Claude and ChatGPT for Coding</h2>
<p>Claude AI and ChatGPT are powerful AI chat tools that can assist with coding tasks, but they require manual code transfer. Unlike integrated development environments (IDEs) with AI plugins, these models operate through chat interfaces, necessitating a copy-paste workflow for code implementation. When using Claude or ChatGPT for coding:</p>
<ul>
<li><p>Describe your coding task or problem in natural language, providing context and specific requirements.</p>
</li>
<li><p>The AI will generate code snippets or explanations based on your prompt.</p>
</li>
<li><p>Manually copy the generated code from the chat interface.</p>
</li>
<li><p>Paste the code into your preferred IDE or code editor for testing and integration.</p>
</li>
<li><p>Iterate by asking follow-up questions or requesting modifications to refine the code.</p>
</li>
</ul>
<p>This process allows for flexible problem-solving and learning, as you can engage in a dialogue about the code, ask for explanations, or request optimizations. However, it requires more manual effort compared to integrated AI coding assistants. Users must carefully review and test the pasted code, as these models may occasionally produce errors or outdated syntax.</p>
<p>Despite this limitation, ChatGPT and Claude Sonnet offer valuable coding assistance, especially for conceptual understanding, algorithm design, and troubleshooting complex issues. I rely on them whenever I want to understand a concept or generate sample code for my problem.</p>
<h2 id="heading-claudes-artifacts-feature">Claude's Artifacts Feature</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730110171575/f77792ae-2032-4878-8cfc-cc215dbe9e2c.png" alt class="image--center mx-auto" /></p>
<p>Claude AI's Artifacts feature represents a significant advancement in AI-assisted coding and data visualization. Artifacts allow Claude to generate and display substantial, standalone content in a dedicated window separate from the main conversation. This feature is particularly useful for coding tasks, as it enables real-time visualization and iteration of code snippets, documents, and even interactive components.</p>
<p>For coding, Artifacts can display various types of content:</p>
<ul>
<li><p>Code snippets in multiple programming languages</p>
</li>
<li><p>HTML webpages with integrated CSS and JavaScript</p>
</li>
<li><p>SVG (Scalable Vector Graphics) images</p>
</li>
<li><p>Mermaid diagrams for visualizing workflows and processes</p>
</li>
<li><p>React components that run directly in the browser</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730110198948/7c360d9b-033e-4e80-bff6-0df71a328e83.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<p>Claude's ability to generate and manipulate charts and graphs within Artifacts is especially noteworthy. Users can request data visualizations, and Claude will create interactive charts using libraries like Recharts. This capability extends to creating infographics and other visual representations of data, making it a powerful tool for data analysis and presentation.</p>
<p>The Artifacts feature also supports version control, allowing users to switch between different iterations of the generated content. This functionality, combined with Claude's ability to understand and analyze entire codebases, makes it a versatile assistant for complex coding tasks and data-driven projects.</p>
<h2 id="heading-the-future-of-development">The Future of Development</h2>
<p>Today's AI tools have transformed how I write code. My daily workflow combines specialized coding assistants that help me write, analyze, and improve code with great efficiency. From intelligent code suggestions in my editor to rapid prototyping and UI generation, these tools handle the heavy lifting of routine tasks. While dedicated development tools manage most of my coding needs, I still regularly turn to AI language models for problem-solving, debugging, and understanding complex concepts. This combination of specialized coding tools and AI assistants has significantly improved my development speed and code quality. The future of coding is here, and it's powered by AI - not to replace developers, but to enhance our capabilities and let us focus on what matters most: solving real problems and building great software.</p>
]]></content:encoded></item><item><title><![CDATA[Vision Agent: Transforming Business Through AI-Powered Computer Vision]]></title><description><![CDATA[Landing AI has launched Vision Agent, a powerful new tool that makes it simpler to create computer vision applications using Python. The company was founded by AI expert Andrew Ng, who also co-founded Google Brain, Coursera, and DeepLearning.AI, and ...]]></description><link>https://blog.donvitocodes.com/vision-agent-transforming-business-through-ai-powered-computer-vision</link><guid isPermaLink="true">https://blog.donvitocodes.com/vision-agent-transforming-business-through-ai-powered-computer-vision</guid><category><![CDATA[visionai]]></category><dc:creator><![CDATA[DonvitoCodes]]></dc:creator><pubDate>Tue, 22 Oct 2024 07:44:21 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1729582915356/8e7f4f63-a7c8-4f21-b792-b330815b64bb.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Landing AI has launched Vision Agent, a powerful new tool that makes it simpler to create computer vision applications using Python. The company was founded by AI expert Andrew Ng, who also co-founded Google Brain, Coursera, and <a target="_blank" href="http://DeepLearning.AI">DeepLearning.AI</a>, and continues to make AI technology more accessible to everyone.</p>
<h2 id="heading-what-is-vision-agent">What is Vision Agent?</h2>
<p>Vision Agent is a development tool that helps programmers build computer vision applications quickly and easily. It stands out for its ability to detect and count objects, and automatically label images—features that are especially useful for automating visual tasks.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/4HqjROcORuQ">https://youtu.be/4HqjROcORuQ</a></div>
<p> </p>
<h2 id="heading-main-features">Main Features</h2>
<h3 id="heading-simplified-development-process">Simplified Development Process</h3>
<ul>
<li><p><strong>Automated Workflows</strong>: Handles the entire process of building and launching vision applications</p>
</li>
<li><p><strong>Smart Model Selection</strong>: Helps you choose the right AI model for your needs</p>
</li>
<li><p><strong>Code Creation</strong>: Automatically writes Python code for your applications</p>
</li>
<li><p><strong>Performance Tools</strong>: Makes sure your application runs efficiently</p>
</li>
</ul>
<h3 id="heading-smart-tools-and-automation">Smart Tools and Automation</h3>
<ul>
<li><p><strong>Quick Testing</strong>: Build and test vision solutions faster</p>
</li>
<li><p><strong>AI-Powered Labeling</strong>: Makes data labeling faster and easier</p>
</li>
<li><p><strong>Automatic Processing</strong>: Creates code instantly for handling results</p>
</li>
<li><p>Object tracking and detection</p>
</li>
<li><p>Counting systems</p>
</li>
<li><p>Image labeling</p>
</li>
<li><p>Works with existing AI systems</p>
</li>
</ul>
<h2 id="heading-vision-ai-technologys-potential-use-cases-for-business">Vision AI Technology's Potential Use Cases for Business</h2>
<p>Vision Agent and similar AI technologies are transforming business operations across multiple sectors, offering powerful solutions to common challenges:</p>
<h3 id="heading-manufacturing-amp-quality-control">Manufacturing &amp; Quality Control</h3>
<p>Vision AI enhances production efficiency through:</p>
<ul>
<li><p>Real-time defect detection during production</p>
</li>
<li><p>Automated quality inspection systems</p>
</li>
<li><p>Assembly line monitoring and verification</p>
</li>
<li><p>Reduction in inspection costs and human error</p>
</li>
<li><p>Predictive maintenance through visual monitoring</p>
</li>
</ul>
<h3 id="heading-healthcare-amp-medical-services">Healthcare &amp; Medical Services</h3>
<p>Medical facilities can improve patient care with:</p>
<ul>
<li><p>Enhanced analysis of X-rays, MRIs, and CT scans</p>
</li>
<li><p>Early disease detection and diagnosis</p>
</li>
<li><p>Automated patient screening processes</p>
</li>
<li><p>Medical record digitization and management</p>
</li>
<li><p>Operating room workflow optimization</p>
</li>
</ul>
<h3 id="heading-retail-amp-e-commerce">Retail &amp; E-commerce</h3>
<p>Businesses can optimize operations through:</p>
<ul>
<li><p>Automated inventory tracking and management</p>
</li>
<li><p>Real-time shelf monitoring and restocking alerts</p>
</li>
<li><p>Customer behavior analysis and heat mapping</p>
</li>
<li><p>Self-checkout systems and theft prevention</p>
</li>
<li><p>Visual search capabilities for online shopping</p>
</li>
</ul>
<h3 id="heading-agriculture-amp-farming">Agriculture &amp; Farming</h3>
<p>Modern farming operations benefit from:</p>
<ul>
<li><p>Drone-based crop monitoring and analysis</p>
</li>
<li><p>Early detection of plant diseases and pests</p>
</li>
<li><p>Automated yield estimation and forecasting</p>
</li>
<li><p>Precision farming and resource optimization</p>
</li>
<li><p>Harvest timing optimization</p>
</li>
</ul>
<h3 id="heading-automotive-amp-transportation">Automotive &amp; Transportation</h3>
<p>The transport sector can improve safety and efficiency with:</p>
<ul>
<li><p>Advanced driver assistance systems (ADAS)</p>
</li>
<li><p>Autonomous navigation capabilities</p>
</li>
<li><p>Traffic monitoring and analysis</p>
</li>
<li><p>Vehicle damage inspection automation</p>
</li>
<li><p>Parking space management systems</p>
</li>
</ul>
<h3 id="heading-logistics-amp-warehouse-management">Logistics &amp; Warehouse Management</h3>
<p>Distribution centers can streamline operations through:</p>
<ul>
<li><p>Automated package sorting and routing</p>
</li>
<li><p>Inventory tracking and optimization</p>
</li>
<li><p>Storage space utilization analysis</p>
</li>
<li><p>Loading/unloading automation</p>
</li>
<li><p>Quality control during packaging</p>
</li>
</ul>
<h3 id="heading-construction-amp-real-estate">Construction &amp; Real Estate</h3>
<p>The building sector can enhance projects using:</p>
<ul>
<li><p>Site safety monitoring and compliance</p>
</li>
<li><p>Progress tracking and documentation</p>
</li>
<li><p>Equipment utilization monitoring</p>
</li>
<li><p>Structural inspection automation</p>
</li>
<li><p>Virtual property tours and inspections</p>
</li>
</ul>
<h2 id="heading-benefits-for-businesses">Benefits for Businesses</h2>
<p>Using Vision Agent brings several key advantages:</p>
<ol>
<li><p>Less manual work needed for visual inspections</p>
</li>
<li><p>More accurate quality control</p>
</li>
<li><p>Lower operating costs</p>
</li>
<li><p>Better decision-making based on data</p>
</li>
<li><p>Faster processing of visual information</p>
</li>
</ol>
<h2 id="heading-benefits-for-developers">Benefits for Developers</h2>
<p>Vision Agent makes developers' work easier through:</p>
<ul>
<li><p><strong>Faster Development</strong>: Build and test applications quickly</p>
</li>
<li><p><strong>Less Coding Required</strong>: Automatic code generation saves time</p>
</li>
<li><p><strong>Smoother Workflows</strong>: Automated processes reduce repetitive work</p>
</li>
<li><p><strong>Better Data Handling</strong>: Makes labeling and processing data easier</p>
</li>
<li><p><strong>Smart Resource Use</strong>: Picks the best tools for each task</p>
</li>
</ul>
<h2 id="heading-whats-next">What's Next</h2>
<p>As tools like Vision Agent continue to improve, we expect to see:</p>
<ul>
<li><p>More people using computer vision technology</p>
</li>
<li><p>More industries adopting these tools</p>
</li>
<li><p>New ways to use the technology</p>
</li>
<li><p>Better integration with current systems</p>
</li>
<li><p>Continued advances in AI capabilities</p>
</li>
</ul>
<h2 id="heading-the-future-of-vision-ai">The Future of Vision AI</h2>
<p>Vision Agent is an important step forward in making computer vision technology easier to use for both developers and businesses. As part of Landing AI's toolkit, it shows how committed the company is to making AI technology available to more people. With its practical features and uses across many industries, Vision Agent is set to play a key role in the future of automated visual processing.</p>
<p>The tool arrives at a time when businesses are looking for better ways to automate their visual inspection and analysis processes. By making complex computer vision technology easier to use, Vision Agent helps more businesses take advantage of advanced AI capabilities.</p>
]]></content:encoded></item><item><title><![CDATA[Selecting the Right AI Tool for your Business]]></title><description><![CDATA[In today's rapidly evolving technological landscape, choosing the right AI tool has become a critical decision for organizations of all sizes. This framework provides a structured approach to evaluating, implementing, and optimizing AI tools for your...]]></description><link>https://blog.donvitocodes.com/selecting-the-right-ai-tool-a-decision-framework</link><guid isPermaLink="true">https://blog.donvitocodes.com/selecting-the-right-ai-tool-a-decision-framework</guid><category><![CDATA[#AIforBusiness]]></category><dc:creator><![CDATA[DonvitoCodes]]></dc:creator><pubDate>Tue, 22 Oct 2024 06:38:27 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1729579304470/06c3d507-8bd7-4b92-b821-0a1746832c34.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In today's rapidly evolving technological landscape, choosing the right AI tool has become a critical decision for organizations of all sizes. This framework provides a structured approach to evaluating, implementing, and optimizing AI tools for your specific needs.  </p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729578759368/2293ca9e-5829-4df7-bb48-350eafbdab51.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-1-evaluation-start-with-hands-on-experience">1. Evaluation: Start with Hands-On Experience</h2>
<p>The journey begins with firsthand experience. Before making any significant commitments:</p>
<ul>
<li><p>Conduct a pilot program with a small test group</p>
</li>
<li><p>Document initial impressions and pain points</p>
</li>
<li><p>Set clear metrics for success</p>
</li>
<li><p>Compare the tool against existing solutions</p>
</li>
</ul>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text"><strong>Pro Tip</strong>: Create a standardized evaluation checklist to ensure consistency across different tools and user groups.</div>
</div>

<h2 id="heading-2-workflow-and-use-case-fit">2. Workflow and Use Case Fit</h2>
<p>Alignment with existing workflows is crucial for successful adoption:</p>
<ul>
<li><p>Map out current processes and identify integration points</p>
</li>
<li><p>Assess the learning curve for different user groups</p>
</li>
<li><p>Evaluate compatibility with existing tools and systems</p>
</li>
<li><p>Consider scalability for future needs</p>
</li>
</ul>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Remember: The best AI tool is one that enhances rather than disrupts your existing workflows.</div>
</div>

<h2 id="heading-3-feedback-collection-and-assessment">3. Feedback Collection and Assessment</h2>
<p>Systematic feedback gathering provides valuable insights:</p>
<ul>
<li><p>Implement structured feedback channels</p>
</li>
<li><p>Conduct regular user surveys</p>
</li>
<li><p>Track usage patterns and pain points</p>
</li>
<li><p>Create a feedback loop for continuous improvement</p>
</li>
</ul>
<h3 id="heading-key-areas-for-feedback">Key Areas for Feedback:</h3>
<ul>
<li><p>User experience and interface</p>
</li>
<li><p>Integration challenges</p>
</li>
<li><p>Performance metrics</p>
</li>
<li><p>Time savings and efficiency gains</p>
</li>
</ul>
<h2 id="heading-4-advantages-and-disadvantages-analysis">4. Advantages and Disadvantages Analysis</h2>
<p>Conduct a thorough analysis of pros and cons:</p>
<h3 id="heading-advantages-to-consider">Advantages to Consider:</h3>
<ul>
<li><p>Performance improvements</p>
</li>
<li><p>Cost savings</p>
</li>
<li><p>Time efficiency</p>
</li>
<li><p>Quality enhancement</p>
</li>
<li><p>Innovation potential</p>
</li>
</ul>
<h3 id="heading-disadvantages-to-watch-for">Disadvantages to Watch For:</h3>
<ul>
<li><p>Implementation challenges</p>
</li>
<li><p>Training requirements</p>
</li>
<li><p>Technical limitations</p>
</li>
<li><p>Privacy concerns</p>
</li>
<li><p>Hidden costs</p>
</li>
</ul>
<h2 id="heading-5-budget-considerations">5. Budget Considerations</h2>
<p>Financial planning must account for:</p>
<ul>
<li><p>Initial purchase or subscription costs</p>
</li>
<li><p>Implementation expenses</p>
</li>
<li><p>Training and onboarding costs</p>
</li>
<li><p>Maintenance and updates</p>
</li>
<li><p>Potential ROI calculations</p>
</li>
</ul>
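<p>The "potential ROI calculations" item above can be made concrete with a quick back-of-the-envelope formula. The sketch below is illustrative only: every input (seat count, hours saved, hourly rate, rollout cost) is a hypothetical placeholder to replace with your own estimates, not vendor pricing.</p>

```typescript
// Illustrative annual ROI estimate for an AI tool rollout.
// All inputs are hypothetical placeholders, not real vendor figures.
interface RoiInputs {
  seats: number;                     // licensed users
  costPerSeatPerMonth: number;       // subscription cost per seat (USD)
  oneTimeCosts: number;              // implementation + training (USD)
  hoursSavedPerSeatPerMonth: number; // estimated productivity gain
  loadedHourlyRate: number;          // fully loaded cost of an employee hour (USD)
}

function estimateAnnualRoi(i: RoiInputs): { cost: number; benefit: number; roi: number } {
  // Direct costs: 12 months of subscriptions plus one-time rollout expenses.
  const cost = i.seats * i.costPerSeatPerMonth * 12 + i.oneTimeCosts;
  // Benefit: hours saved across all seats, valued at the loaded hourly rate.
  const benefit = i.seats * i.hoursSavedPerSeatPerMonth * 12 * i.loadedHourlyRate;
  return { cost, benefit, roi: (benefit - cost) / cost };
}

// Hypothetical example: 50 seats at $20/month, $10k rollout,
// 4 hours saved per seat per month, valued at $60/hour.
const result = estimateAnnualRoi({
  seats: 50,
  costPerSeatPerMonth: 20,
  oneTimeCosts: 10_000,
  hoursSavedPerSeatPerMonth: 4,
  loadedHourlyRate: 60,
});
console.log(result.roi.toFixed(2)); // prints "5.55" (cost $22,000, benefit $144,000)
```

<p>A calculation like this gives you the "complete financial picture" the callout below asks for, and it forces you to state your time-savings assumptions explicitly so they can be validated during the pilot.</p>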
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text"><strong>Important</strong>: Factor in both direct and indirect costs for a complete financial picture.</div>
</div>

<h2 id="heading-6-making-the-decision">6. Making the Decision</h2>
<p>The decision-making process should be:</p>
<ul>
<li><p>Data-driven</p>
</li>
<li><p>Transparent</p>
</li>
<li><p>Inclusive of key stakeholders</p>
</li>
<li><p>Aligned with organizational goals</p>
</li>
</ul>
<h3 id="heading-decision-criteria-checklist">Decision Criteria Checklist:</h3>
<ul>
<li><p>[ ] Meets core requirements</p>
</li>
<li><p>[ ] Falls within budget constraints</p>
</li>
<li><p>[ ] Shows clear ROI potential</p>
</li>
<li><p>[ ] Has stakeholder buy-in</p>
</li>
<li><p>[ ] Aligns with security policies</p>
</li>
</ul>
<h2 id="heading-7-implementation-strategy">7. Implementation Strategy</h2>
<p>A successful rollout requires:</p>
<ul>
<li><p>Clear communication plan</p>
</li>
<li><p>Phased implementation approach</p>
</li>
<li><p>Dedicated support resources</p>
</li>
<li><p>Training programs</p>
</li>
<li><p>Success metrics tracking</p>
</li>
</ul>
<h3 id="heading-implementation-timeline-example">Implementation Timeline Example:</h3>
<ol>
<li><p>Phase 1: Initial setup and configuration</p>
</li>
<li><p>Phase 2: Pilot group deployment</p>
</li>
<li><p>Phase 3: Department-wide rollout</p>
</li>
<li><p>Phase 4: Organization-wide implementation</p>
</li>
<li><p>Phase 5: Optimization and scaling</p>
</li>
</ol>
<h2 id="heading-8-productivity-enhancement">8. Productivity Enhancement</h2>
<p>Maximize value through:</p>
<ul>
<li><p>Regular training sessions</p>
</li>
<li><p>Best practices documentation</p>
</li>
<li><p>Use case repositories</p>
</li>
<li><p>Success story sharing</p>
</li>
<li><p>Continuous optimization</p>
</li>
</ul>
<h2 id="heading-the-continuous-improvement-loop">The Continuous Improvement Loop</h2>
<p>Remember that tool selection is not a one-time decision. Establish a regular review cycle to:</p>
<ul>
<li><p>Reassess tool performance</p>
</li>
<li><p>Evaluate new alternatives</p>
</li>
<li><p>Update requirements</p>
</li>
<li><p>Optimize processes</p>
</li>
<li><p>Consider emerging technologies</p>
</li>
</ul>
<p>Selecting and implementing the right AI tool is a complex but manageable process when approached systematically. By following this framework, organizations can make informed decisions that drive real value and maintain competitive advantage in an AI-driven world.</p>
<h3 id="heading-next-steps">Next Steps</h3>
<ol>
<li><p>Create your evaluation checklist</p>
</li>
<li><p>Identify key stakeholders</p>
</li>
<li><p>Set clear success metrics</p>
</li>
<li><p>Begin your pilot program</p>
</li>
<li><p>Document your journey</p>
</li>
</ol>
<p>Remember: The goal is not just to implement AI, but to enhance your organization's capabilities in meaningful and measurable ways.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">What has been your experience with implementing AI tools in your organization? Share your insights and lessons below!</div>
</div>]]></content:encoded></item><item><title><![CDATA[Self-Host NextJS in Fly.io: A Quick Try]]></title><description><![CDATA[Recently, Lee Robinson of Vercel gave a tutorial on how to self-host NextJS in a VPS. He used DigitalOcean. I also tried it in Hetzner using Ubuntu instance and it worked.
The example used Docker to deploy NextJS. It has all the dependencies like Ngi...]]></description><link>https://blog.donvitocodes.com/self-host-nextjs-in-flyio-a-quick-try</link><guid isPermaLink="true">https://blog.donvitocodes.com/self-host-nextjs-in-flyio-a-quick-try</guid><category><![CDATA[Next.js]]></category><category><![CDATA[fly.io]]></category><dc:creator><![CDATA[DonvitoCodes]]></dc:creator><pubDate>Sun, 13 Oct 2024 07:07:05 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1728802175074/d407fb75-cd3a-4330-9577-b4fcea616ef3.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Recently, Lee Robinson of Vercel gave a tutorial on how to self-host NextJS in a VPS. He used DigitalOcean. I also tried it in Hetzner using Ubuntu instance and it worked.</p>
<p>The <a target="_blank" href="https://github.com/leerob/next-self-host">example</a> used Docker to deploy NextJS. It includes all the dependencies, like Nginx and a Postgres database, which also run in Docker, and uses docker-compose to run all the containers required for the demo.</p>
<p>I read the code and saw that the Dockerfile was pretty straightforward. Since the app just runs in Docker, I figured it should work on Fly.io too, so I tried it. I made a simpler example with only the NextJS app and deployed it to Fly.io.</p>
<p><a target="_blank" href="http://Fly.io">Fly.io</a> is a platform where you can deploy your app easily. I was also able to deploy an API route within NextJS, which is called by a page as a client component. This is just a quick test for simple apps; I am not sure if all NextJS features are supported. For this test, I just needed the API routes to work to confirm that server-side rendering works as well.</p>
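<p>For context, the API route I mention can be as small as a single App Router route handler. The sketch below is an illustration, not the code from my repo: the file path and the hard-coded post are hypothetical, though the fields mirror the <code>BlogPost</code> interface used by the page further down. It returns JSON via the standard <code>Response</code> object, which Next.js route handlers support on Node 18+.</p>

```typescript
// src/app/api/blogs/route.ts (hypothetical path and data, for illustration)
// A minimal Next.js App Router route handler serving the JSON
// that the client component fetches from /api/blogs.
type BlogPost = {
  id: number;
  title: string;
  content: string;
  author: string;
  date: string;
  image: string;
};

// Hard-coded sample data; a real app would read from a database.
const posts: BlogPost[] = [
  {
    id: 1,
    title: 'Hello from Fly.io',
    content: 'A NextJS app self-hosted on Fly.io.',
    author: 'DonvitoCodes',
    date: '2024-10-13',
    image: '/images/hello.png',
  },
];

// GET /api/blogs returns the list of posts as JSON.
export async function GET(): Promise<Response> {
  return Response.json(posts);
}
```

<p>Because a route like this executes on the server, seeing it respond correctly on Fly.io is a quick way to confirm the server-side parts of the app survived the move off Vercel.</p>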
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/watch?v=3PFsH14VxVU">https://www.youtube.com/watch?v=3PFsH14VxVU</a></div>
<p> </p>
<h2 id="heading-the-nextjs-app">The NextJS App</h2>
<p>I created a new NextJS app using this command.</p>
<pre><code class="lang-bash">npx create-next-app@latest
</code></pre>
<p>Just answer the questions and it’ll create a fresh NextJS app for you.</p>
<h3 id="heading-add-a-new-page-in-srcappblogpagetsx">Add a new page in /src/app/blog/page.tsx</h3>
<pre><code class="lang-typescript"><span class="hljs-string">'use client'</span>;

<span class="hljs-keyword">import</span> { useState, useEffect } <span class="hljs-keyword">from</span> <span class="hljs-string">'react'</span>;
<span class="hljs-keyword">import</span> Image <span class="hljs-keyword">from</span> <span class="hljs-string">'next/image'</span>;
<span class="hljs-keyword">import</span> Link <span class="hljs-keyword">from</span> <span class="hljs-string">'next/link'</span>;

<span class="hljs-keyword">interface</span> BlogPost {
  id: <span class="hljs-built_in">number</span>;
  title: <span class="hljs-built_in">string</span>;
  content: <span class="hljs-built_in">string</span>;
  author: <span class="hljs-built_in">string</span>;
  date: <span class="hljs-built_in">string</span>;
  image: <span class="hljs-built_in">string</span>;
}

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">BlogPage</span>(<span class="hljs-params"></span>) </span>{
  <span class="hljs-keyword">const</span> [blogPosts, setBlogPosts] = useState&lt;BlogPost[]&gt;([]);

  useEffect(<span class="hljs-function">() =&gt;</span> {
    <span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">fetchBlogPosts</span>(<span class="hljs-params"></span>) </span>{
      <span class="hljs-keyword">try</span> {
        <span class="hljs-keyword">const</span> response = <span class="hljs-keyword">await</span> fetch(<span class="hljs-string">'/api/blogs'</span>);
        <span class="hljs-keyword">if</span> (!response.ok) {
          <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Error</span>(<span class="hljs-string">'Failed to fetch blog posts'</span>);
        }
        <span class="hljs-keyword">const</span> data = <span class="hljs-keyword">await</span> response.json();
        setBlogPosts(data);
      } <span class="hljs-keyword">catch</span> (error) {
        <span class="hljs-built_in">console</span>.error(<span class="hljs-string">'Error fetching blog posts:'</span>, error);
      }
    }

    fetchBlogPosts();
  }, []);

  <span class="hljs-keyword">return</span> (
    &lt;div className=<span class="hljs-string">"container mx-auto px-4 py-8"</span>&gt;
      &lt;h1 className=<span class="hljs-string">"text-3xl font-bold mb-8"</span>&gt;Blog Posts&lt;/h1&gt;
      &lt;Link href=<span class="hljs-string">"/"</span> className=<span class="hljs-string">"text-blue-500 hover:underline mb-4 inline-block"</span>&gt;
        Back to Home
      &lt;/Link&gt;
      {blogPosts.map(<span class="hljs-function">(<span class="hljs-params">post</span>) =&gt;</span> (
        &lt;article key={post.id} className=<span class="hljs-string">"mb-8 p-6 bg-white rounded-lg shadow-md"</span>&gt;
          &lt;div className=<span class="hljs-string">"flex flex-col md:flex-row"</span>&gt;
            &lt;div className=<span class="hljs-string">"md:w-1/3 mb-4 md:mb-0 md:mr-6"</span>&gt;
              &lt;Image
                src={post.image || <span class="hljs-string">'https://picsum.photos/600'</span>}
                alt={post.title}
                width={<span class="hljs-number">100</span>}
                height={<span class="hljs-number">100</span>}
                className=<span class="hljs-string">"w-full h-auto rounded-lg"</span>
              /&gt;
            &lt;/div&gt;
            &lt;div className=<span class="hljs-string">"md:w-2/3"</span>&gt;
              &lt;h2 className=<span class="hljs-string">"text-2xl font-semibold mb-2"</span>&gt;{post.title}&lt;/h2&gt;
              &lt;p className=<span class="hljs-string">"text-gray-600 mb-4"</span>&gt;
                By {post.author} on {<span class="hljs-keyword">new</span> <span class="hljs-built_in">Date</span>(post.date).toLocaleDateString()}
              &lt;/p&gt;
              &lt;p className=<span class="hljs-string">"text-gray-800 mb-4"</span>&gt;{post.content.slice(<span class="hljs-number">0</span>, <span class="hljs-number">200</span>)}...&lt;/p&gt;
              &lt;Link href=<span class="hljs-string">"https://donvitocodes.com"</span> passHref&gt;
                &lt;span className=<span class="hljs-string">"inline-block bg-blue-500 text-white px-4 py-2 rounded hover:bg-blue-600 transition-colors duration-200"</span>&gt;
                  Read More
                &lt;/span&gt;
              &lt;/Link&gt;
            &lt;/div&gt;
          &lt;/div&gt;
        &lt;/article&gt;
      ))}
      &lt;footer className=<span class="hljs-string">"mt-8 text-center"</span>&gt;
        &lt;a href=<span class="hljs-string">"https://donvitocodes.com"</span> target=<span class="hljs-string">"_blank"</span> rel=<span class="hljs-string">"noopener noreferrer"</span> className=<span class="hljs-string">"text-blue-600 hover:underline"</span>&gt;
          Visit donvitocodes.com
        &lt;/a&gt;
      &lt;/footer&gt;
    &lt;/div&gt;
  );
}
</code></pre>
<h3 id="heading-then-add-an-api-route-which-was-called-within-the-blog-page">Then, add an API route which is called from the blog page.</h3>
<pre><code class="lang-typescript"><span class="hljs-comment">// saved in src/app/api/blogs/route.ts</span>

<span class="hljs-keyword">import</span> { NextResponse } <span class="hljs-keyword">from</span> <span class="hljs-string">'next/server'</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> revalidate = <span class="hljs-number">0</span>;

<span class="hljs-comment">// Static data for blog posts</span>
<span class="hljs-keyword">const</span> blogPosts = [
  {
    id: <span class="hljs-number">1</span>,
    title: <span class="hljs-string">'Getting Started with Next.js'</span>,
    content: <span class="hljs-string">'Next.js is a powerful React framework that enables you to build server-side rendered and statically generated web applications using React. It provides a great developer experience with features like automatic code splitting, optimized performance, and easy deployment. Whether you\'re building a simple blog or a complex web application, Next.js offers the tools and flexibility to create fast, scalable, and SEO-friendly websites. Did you know that Next.js was created by Vercel and first released in 2016? Since then, it has become one of the most popular React frameworks, used by companies like Netflix, TikTok, and Twitch.'</span>,
    author: <span class="hljs-string">'DonvitoCodes'</span>,
    date: <span class="hljs-string">'2023-05-01'</span>,
    image: <span class="hljs-string">'https://picsum.photos/600'</span>
  },
  {
    id: <span class="hljs-number">2</span>,
    title: <span class="hljs-string">'The Power of TypeScript'</span>,
    content: <span class="hljs-string">'TypeScript adds static typing to JavaScript, bringing a new level of robustness and maintainability to your code. With TypeScript, you can catch errors early in the development process, improve code readability, and enhance IDE support. It\'s particularly beneficial for large-scale applications, where it can significantly reduce bugs and improve team collaboration. Learn how TypeScript can supercharge your JavaScript development and make your code more reliable and easier to refactor. Fun fact: TypeScript was developed by Microsoft and is used in many of their projects, including Visual Studio Code. It\'s estimated that using TypeScript can reduce bug density by up to 15%!'</span>,
    author: <span class="hljs-string">'DonvitoCodes'</span>,
    date: <span class="hljs-string">'2023-05-15'</span>,
    image: <span class="hljs-string">'https://picsum.photos/600'</span>
  },
  {
    id: <span class="hljs-number">3</span>,
    title: <span class="hljs-string">'Building APIs with Next.js'</span>,
    content: <span class="hljs-string">'Next.js provides an easy way to create API routes, allowing you to build full-stack applications with ease. With API routes, you can handle server-side logic, interact with databases, and create RESTful endpoints all within your Next.js project. This seamless integration between frontend and backend makes Next.js an excellent choice for developers looking to build modern web applications. Discover how to leverage Next.js API routes to create powerful, efficient, and scalable APIs for your projects. Did you know that Next.js API routes support serverless functions out of the box? This means you can deploy your APIs without managing server infrastructure, leading to cost savings and improved scalability.'</span>,
    author: <span class="hljs-string">'DonvitoCodes'</span>,
    date: <span class="hljs-string">'2023-06-01'</span>,
    image: <span class="hljs-string">'https://picsum.photos/600'</span>
  },
  {
    id: <span class="hljs-number">4</span>,
    title: <span class="hljs-string">'Deploying Next.js with Fly.io'</span>,
    content: <span class="hljs-string">'Fly.io is a platform for deploying Next.js apps that offers a seamless experience for developers. With Fly.io, you can easily deploy your Next.js applications globally, ensuring low latency and high availability for users around the world. This platform provides features like automatic SSL, custom domains, and easy scaling options. Learn how to leverage Fly.io\'s powerful infrastructure to deploy your Next.js applications quickly and efficiently, and discover best practices for optimizing your app\'s performance in a production environment. Interestingly, Fly.io uses a unique approach called "edge computing" to run your applications closer to your users, potentially reducing latency by up to 50% compared to traditional cloud hosting.'</span>,
    author: <span class="hljs-string">'DonvitoCodes'</span>,
    date: <span class="hljs-string">'2024-10-13'</span>,
    image: <span class="hljs-string">'https://picsum.photos/600'</span>
  },
  {
    id: <span class="hljs-number">5</span>,
    title: <span class="hljs-string">'The Rise of JAMstack'</span>,
    content: <span class="hljs-string">'JAMstack, which stands for JavaScript, APIs, and Markup, is revolutionizing web development. This modern architecture pattern focuses on decoupling the frontend from the backend, resulting in faster, more secure, and highly scalable websites. By leveraging static site generators, serverless functions, and content delivery networks, JAMstack sites can achieve incredible performance and developer productivity. Did you know that sites built with JAMstack can be up to 10 times faster than traditional dynamic websites? Explore how JAMstack can transform your web development process and deliver lightning-fast experiences to your users.'</span>,
    author: <span class="hljs-string">'DonvitoCodes'</span>,
    date: <span class="hljs-string">'2024-11-01'</span>,
    image: <span class="hljs-string">'https://picsum.photos/600'</span>
  },
  {
    id: <span class="hljs-number">6</span>,
    title: <span class="hljs-string">'Mastering CSS Grid Layout'</span>,
    content: <span class="hljs-string">'CSS Grid Layout is a powerful tool for creating complex, responsive layouts with ease. It introduces a two-dimensional grid system to CSS, allowing for more flexible and intuitive design possibilities. With CSS Grid, you can create magazine-style layouts, complex dashboard interfaces, and responsive designs without relying on external frameworks. A fascinating tidbit: CSS Grid was in development for nearly 20 years before being widely adopted by modern browsers in 2017. Learn how to harness the full potential of CSS Grid and revolutionize your approach to web layout design.'</span>,
    author: <span class="hljs-string">'DonvitoCodes'</span>,
    date: <span class="hljs-string">'2024-11-15'</span>,
    image: <span class="hljs-string">'https://picsum.photos/600'</span>
  }
];

<span class="hljs-keyword">export</span> <span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">GET</span>(<span class="hljs-params"></span>) </span>{
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'API /blogs called at '</span> + <span class="hljs-keyword">new</span> <span class="hljs-built_in">Date</span>().toISOString());
  <span class="hljs-keyword">return</span> NextResponse.json(blogPosts);
}
</code></pre>
<p>You can also clone my repo and just change the files for your own purposes.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/donvito/next-self-host-fly">https://github.com/donvito/next-self-host-fly</a></div>
<p> </p>
<h3 id="heading-create-a-dockerfile">Create a Dockerfile</h3>
<p><a target="_blank" href="http://Fly.io">Fly.io</a> uses Docker to build and run your application. Here's an example Dockerfile for a Next.js app. I just copied this from <a class="user-mention" href="https://hashnode.com/@leerob">Lee Robinson</a>’s <a target="_blank" href="https://github.com/leerob/next-self-host">next-self-host</a> demo.</p>
<pre><code class="lang-dockerfile"><span class="hljs-keyword">FROM</span> oven/bun:alpine AS base

<span class="hljs-comment"># Stage 1: Install dependencies</span>
<span class="hljs-keyword">FROM</span> base AS deps
<span class="hljs-keyword">WORKDIR</span><span class="bash"> /app</span>
<span class="hljs-keyword">COPY</span><span class="bash"> package.json bun.lockb ./</span>
<span class="hljs-keyword">RUN</span><span class="bash"> bun install --frozen-lockfile</span>

<span class="hljs-comment"># Stage 2: Build the application</span>
<span class="hljs-keyword">FROM</span> base AS builder
<span class="hljs-keyword">WORKDIR</span><span class="bash"> /app</span>
<span class="hljs-keyword">COPY</span><span class="bash"> --from=deps /app/node_modules ./node_modules</span>
<span class="hljs-keyword">COPY</span><span class="bash"> . .</span>
<span class="hljs-keyword">RUN</span><span class="bash"> bun run build </span>

<span class="hljs-comment"># Stage 3: Production server</span>
<span class="hljs-keyword">FROM</span> base AS runner
<span class="hljs-keyword">WORKDIR</span><span class="bash"> /app</span>
<span class="hljs-keyword">ENV</span> NODE_ENV=production
<span class="hljs-keyword">COPY</span><span class="bash"> --from=builder /app/public ./public</span>
<span class="hljs-keyword">COPY</span><span class="bash"> --from=builder /app/.next/standalone ./</span>
<span class="hljs-keyword">COPY</span><span class="bash"> --from=builder /app/.next/static ./.next/static</span>

<span class="hljs-keyword">EXPOSE</span> <span class="hljs-number">3000</span>
<span class="hljs-keyword">CMD</span><span class="bash"> [<span class="hljs-string">"bun"</span>, <span class="hljs-string">"run"</span>, <span class="hljs-string">"server.js"</span>]</span>
</code></pre>
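<p>One thing worth noting: the Dockerfile above copies from <code>.next/standalone</code>, which only exists when standalone output is enabled in your Next.js config. Here is a minimal sketch of what that setting looks like (the file name and any other options are assumptions and may differ in your project):</p>
<pre><code class="lang-javascript">/** next.config.js - minimal sketch, assuming the default project layout */
const nextConfig = {
  // 'standalone' makes `next build` emit .next/standalone with a
  // self-contained server.js, which the runner stage of the Dockerfile copies.
  output: 'standalone',
};

module.exports = nextConfig;
</code></pre>
<p>Without this setting, the <code>COPY --from=builder /app/.next/standalone ./</code> step will fail because that folder is never produced.</p>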
<h2 id="heading-deploying-to-flyiohttpflyio">Deploying to <a target="_blank" href="http://Fly.io">Fly.io</a></h2>
<h3 id="heading-1-install-the-fly-cli">1. Install the Fly CLI</h3>
<p>If you haven't already, install the Fly CLI. On a Mac, you can use Homebrew; on Linux or Windows, follow the guide at <a target="_blank" href="https://fly.io/docs/flyctl/install/">https://fly.io/docs/flyctl/install/</a>.</p>
<pre><code class="lang-bash">brew install flyctl
</code></pre>
<h3 id="heading-3-initialize-your-flyiohttpflyio-app">2. Initialize Your <a target="_blank" href="http://Fly.io">Fly.io</a> App</h3>
<p>Navigate to your NextJS project directory and run:</p>
<pre><code class="lang-bash">fly launch
</code></pre>
<p>This command will guide you through creating a new app on <a target="_blank" href="http://Fly.io">Fly.io</a> and generate a <code>fly.toml</code> configuration file.</p>
<h3 id="heading-4-create-a-flyio-launch-configuration">3. Review the generated fly.toml configuration</h3>
<p>The <code>fly.toml</code> file is automatically created when you run</p>
<pre><code class="lang-bash">fly launch
</code></pre>
<p>It will generate a <code>fly.toml</code> file similar to the one below. You can adjust the configuration based on your needs.</p>
<pre><code class="lang-toml"><span class="hljs-comment"># fly.toml app configuration file generated for nextjs-fly-polished-resonance-4569 on 2024-10-13T11:00:00+08:00</span>
<span class="hljs-comment">#</span>
<span class="hljs-comment"># See https://fly.io/docs/reference/configuration/ for information about how to use this file.</span>
<span class="hljs-comment">#</span>

<span class="hljs-attr">app</span> = <span class="hljs-string">'nextjs-fly-polished-resonance-4569'</span>
<span class="hljs-attr">primary_region</span> = <span class="hljs-string">'syd'</span>

<span class="hljs-section">[build]</span>

<span class="hljs-section">[http_service]</span>
  <span class="hljs-attr">internal_port</span> = <span class="hljs-number">3000</span>
  <span class="hljs-attr">force_https</span> = <span class="hljs-literal">true</span>
  <span class="hljs-attr">auto_stop_machines</span> = <span class="hljs-string">'stop'</span>
  <span class="hljs-attr">auto_start_machines</span> = <span class="hljs-literal">true</span>
  <span class="hljs-attr">min_machines_running</span> = <span class="hljs-number">0</span>
  <span class="hljs-attr">processes</span> = [<span class="hljs-string">'app'</span>]

<span class="hljs-section">[[vm]]</span>
  <span class="hljs-attr">memory</span> = <span class="hljs-string">'1gb'</span>
  <span class="hljs-attr">cpu_kind</span> = <span class="hljs-string">'shared'</span>
  <span class="hljs-attr">cpus</span> = <span class="hljs-number">1</span>
</code></pre>
<h3 id="heading-5-run-fly-deploy-for-rebuilding-and-deploying-your-code">4. Run fly deploy to rebuild and deploy your code</h3>
<pre><code class="lang-bash">fly deploy
</code></pre>
<h2 id="heading-issues">Issues</h2>
<p>Although the app is working fine, I am getting this error in the production deployment. It seems sharp wasn't installed properly, and I'm still trying to figure it out. sharp is a required dependency used for NextJS image optimisation. If you know the solution, you can comment below.</p>
<pre><code class="lang-text">syd [info] API /blogs called at 2024-10-13T04:48:31.159Z
syd [info] ⨯ Error: 'sharp' is required to be installed in standalone mode for the image optimization to function correctly. Read more at: https://nextjs.org/docs/messages/sharp-missing-in-production
syd [info] API /blogs called at 2024-10-13T04:48:37.506Z
</code></pre>
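<p>If anyone wants to try a fix: since the error says sharp must be installed in standalone mode, one possible approach (a sketch I haven't verified on Fly.io) is to install it explicitly in the deps stage of the Dockerfile so it ends up in <code>node_modules</code> and gets traced into the standalone output:</p>
<pre><code class="lang-dockerfile"># Hypothetical fix (untested): add sharp explicitly so the
# standalone server can load it for image optimisation.
RUN bun add sharp
</code></pre>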
<h2 id="heading-thats-it">That’s it!</h2>
<p>This is just a quick test to deploy simple NextJS apps to fly.io if you want to self-host. I didn't test all NextJS features, so this is not for production use yet.</p>
]]></content:encoded></item><item><title><![CDATA[I am a Developer and This is My AI Story]]></title><description><![CDATA[I've been working with AI - using it for productivity and developing apps with it. I started using AI when ChatGPT was released in November 2022. I was really amazed on how it can answer questions and produce code. I am someone who would try out a ne...]]></description><link>https://blog.donvitocodes.com/i-am-a-developer-and-this-is-my-ai-story</link><guid isPermaLink="true">https://blog.donvitocodes.com/i-am-a-developer-and-this-is-my-ai-story</guid><category><![CDATA[generative ai]]></category><category><![CDATA[AI]]></category><dc:creator><![CDATA[DonvitoCodes]]></dc:creator><pubDate>Tue, 08 Oct 2024 06:01:18 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1728367147927/e5a479d8-6f19-4614-8eb5-ff09b21a3fc7.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I've been working with AI - using it for productivity and developing apps with it. I started using AI when ChatGPT was released in November 2022. I was really amazed at how it could answer questions and produce code. I am someone who will try out any new technology, and I kept using ChatGPT. I started with just asking questions about anything. It was fun and I enjoyed it.</p>
<p>After a while, I realised that this thing -- ChatGPT -- was really smart and could be useful for some of my tasks as a developer. So I tried asking it to help with code. At that time, its knowledge was not yet up to date and it didn't know the latest version of Go. But that didn't stop me from using it for coding, because even with its stock knowledge it was able to help me with some trivial coding tasks. It also helped me understand concepts I didn't understand.</p>
<p>On June 21, 2022, <a target="_blank" href="https://github.com/features/copilot">Github Copilot</a> was launched. Wow, I realised this was huge. I felt that this product was really a game-changer. Since Github Copilot integrates with Visual Studio Code and JetBrains IDEs as a plugin, using AI while coding would be seamless. You guessed it: I tried it out. Github Copilot only had a free trial, so I eventually paid for it. I used it for my coding and got really good help from its auto-complete feature.</p>
<p>So this was the point when Github Copilot and ChatGPT were my go-to AI tools for coding.</p>
<p>After using Github Copilot for a few months, I realised that its auto-complete feature somewhat broke for me. I didn't really understand what happened, so I just used the chat feature that came with it. After a while, I realised I was not getting my money's worth with Github Copilot and went back to just using ChatGPT for coding, copy-pasting code to and from it. It was not convenient, but it saved me the money from the Copilot subscription.</p>
<h3 id="heading-github-copilot-was-exciting-but-it-was-not-really-a-game-changer-bummer">Github Copilot was exciting but it was not really a game-changer. Bummer!</h3>
<p>I moved on and continued with ChatGPT.</p>
<p>Along the way, someone suggested that I use <a target="_blank" href="https://codeium.com/">Codeium</a>. Codeium has the same functionality as Github Copilot. It works with Visual Studio Code, JetBrains IDEs, and even Neovim! The free tier was really generous and I never hit the limit. I was happy!</p>
<p>Back at work, I was assigned to help our founder with a technical demo at Tech Week Singapore last year, around Oct 2023. The deadline was tight, so I had to use AI to build it. Coding an AI product using AI was indeed an interesting use case. The product was called HomerGPT; the idea was to build an AI chat application knowledgeable about Singapore real estate, our domain. Our tech stack was NextJS for the frontend and Go for our backend APIs. Luckily, I found a Vercel template which did exactly what we needed for the UI and backend. The only catch: the example was in TypeScript. Sheesh, I didn't even know TypeScript!</p>
<p>I needed APIs for the backend, and the OpenAI integration examples I had tried were in Python. Python is not even in our backend tech stack. Luckily, I found a great library, Vercel's <a target="_blank" href="https://sdk.vercel.ai/">AI SDK</a>, so I decided to do the app in one language: TypeScript. I also deployed the APIs in NextJS together with the frontend code, which was not ideal, but it was the best option given the limited time.</p>
<p>After days of hard work, we were able to do the demo of HomerGPT in TechWeek Singapore successfully.</p>
<p>Job's not done yet! Remember I deployed the backend APIs in NextJS? I had to move that out. I thought of moving it to Go, but thinking forward, that would not have been a good decision. AI is mostly delivered with Python, so I migrated the TypeScript code to Python. I asked ChatGPT to migrate the code for me. Voila, it took me only a week to migrate all the code from TypeScript to Python. Yes, that's from code migration to production.</p>
<h3 id="heading-i-was-impressed-with-the-iterations-and-relationship-i-had-with-these-ai-tools-ai-was-a-real-assistant-ai-even-did-the-more-difficult-work-imagine-that">I was impressed with the iterations and relationship I had with these AI tools. AI was a real assistant. AI even did the more difficult work, imagine that!</h3>
<p>A few months passed, and I left my job to start my learning journey. I left my previous role because I wanted to take a career break and accelerate my learning and experience with AI. Frontend was my weakest skill, so I learned the basics of React and NextJS, a fullstack framework, since I wanted to build AI apps. I learned NextJS to the point where I could start a project and deploy it to production. I thought that was good enough to go live with an app.</p>
<p>While I was learning and engaging with the community on X, previously known as Twitter, I came to know about some very good tools for producing user interfaces and components, and even a new IDE.</p>
<h3 id="heading-im-not-a-frontend-developer-how-can-i-make-ui">I'm not a frontend developer! How can I make UI?!</h3>
<p>Fret not, here are some AI tools I discovered.</p>
<h3 id="heading-vercel-v0">Vercel v0</h3>
<p><a target="_blank" href="https://v0.dev">v0.dev</a> is developed by Vercel. This web-based tool is designed to let you generate user interface components fast. Generated components use <a target="_blank" href="https://tailwindcss.com/">Tailwind CSS</a> and <a target="_blank" href="https://ui.shadcn.com/">shadcn</a>. v0 also has a chat feature, <a target="_blank" href="https://v0.dev/chat">v0.dev/chat</a>, which allows developers to iterate while generating components. Adding the UI components to your project is also a breeze: you execute just one command and the component gets added to your project.</p>
<p>I created a website with just a single prompt using v0. Check it out.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/watch?v=lQVKbEYNyT4">https://www.youtube.com/watch?v=lQVKbEYNyT4</a></div>
<p> </p>
<h3 id="heading-ia">Claude AI</h3>
<p><a target="_blank" href="https://claude.ai/new">Claude AI</a> also released their artifact feature alongside its most powerful model for coding, Claude 3.5 Sonnet. This killed ChatGPT for me. I don't use ChatGPT anymore and even canceled my subscription. You can learn more about it <a target="_blank" href="https://www.anthropic.com/news/claude-3-5-sonnet">here</a>.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/rHqk0ZGb6qo?si=TWA8u4YjpFJUNZJZ">https://youtu.be/rHqk0ZGb6qo?si=TWA8u4YjpFJUNZJZ</a></div>
<p> </p>
<h3 id="heading-hold-your-breath">Hold your breath!</h3>
<p>When ChatGPT was launched, everyone was amazed. As a developer, I had the same feeling of excitement when I used Cursor for the first time. I said to myself, "Sh*t! This is the real game-changer!"</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://x.com/rickyrobinett/status/1825581674870055189">https://x.com/rickyrobinett/status/1825581674870055189</a></div>
<p> </p>
<h3 id="heading-cursor-ai">Cursor AI</h3>
<p>Cursor has inline code editing, which makes working with code really seamless. It's not perfect, but most of the time it gets the job done. It also has a Composer feature which allows you to generate multiple files. The user experience is great and the Cursor team just nailed it!</p>
<p>Check out this compilation of Cursor applications</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://x.com/ai_for_success/status/1843291106894577838">https://x.com/ai_for_success/status/1843291106894577838</a></div>
<p> </p>
<h3 id="heading-ia-1">Walk the talk, will you?!</h3>
<p>Yes, of course!</p>
<p>Here's a list of apps I've built with AI.</p>
<ol>
<li><p><a target="_blank" href="https://www.aidreamphoto.com/">AI Dream Photo</a> - Unleash your dreams with AI Dream Photo! Transform into a successful businessman, an explorer on Mars, a tourist in Paris, or a rockstar on stage. Create stunning photos for your social media profiles, professional headshots, or just for fun to share with friends.</p>
</li>
<li><p><a target="_blank" href="https://ohmyhome.com/en-sg/blog/your-ai-property-expert-is-here-introducing-homergpt-beta/">HomerGPT</a> - HomerGPT is an AI-powered chat application designed specifically for the Singapore real estate market. It provides users with personalized property recommendations and insights, making the home-buying process more efficient and informed.</p>
</li>
<li><p><a target="_blank" href="https://condosph.com">CondosPH</a> - This is my website for my real estate side jobs. It showcases properties for resale and includes a home loan calculator.</p>
</li>
<li><p><a target="_blank" href="https://www.donvitocodes.com/assets/ai-prompt-creator/">AI Prompt Creator</a> - This tool assists users in creating and editing prompts for ChatGPT. It has more than 300 pre-defined prompts.</p>
</li>
<li><p><a target="_blank" href="https://chromewebstore.google.com/detail/ai-prompt-library-by-donv/mplkgmmdongmokckekhojjnooopkphlf">AI Prompt Library</a> - This Chrome extension allows you to save and reuse prompts you frequently use in ChatGPT.</p>
</li>
<li><p><a target="_blank" href="https://pdf-to-images.streamlit.app/">PDF to Images</a> - This web application allows users to convert PDF documents into image formats quickly. It is particularly useful for extracting visual content from PDFs, making it easier to share and utilize in other projects.</p>
</li>
<li><p><a target="_blank" href="https://v0gen.vercel.app/">Curated v0.dev Generations</a> - I made this tool so we can easily discover and reuse really good v0.dev generations.</p>
</li>
<li><p><a target="_blank" href="https://github.com/donvito/react-native-openai-demo">React Native Chat App using OpenAI</a> - This is a React Native project starter for a mobile application built with Expo. It includes OpenAI integration.</p>
</li>
</ol>
<p>So that's it. This is my AI journey, and I hope you learn a few things from it.</p>
<p>You can reach me at my website <a target="_blank" href="https://www.donvitocodes.com">donvitocodes.com</a> if you'd like me to share my story with your team.</p>
<p>Cheers and let's enjoy our AI journey!</p>
]]></content:encoded></item><item><title><![CDATA[How to Build Your AI App: A Comprehensive Startup Simulator Course]]></title><description><![CDATA[Hi everyone!
Months back, I initially thought of doing a generic course to teach AI to non-technical people. I validated my idea and spoke to a few friends and previous colleagues. Based on my observation, people still have qualms on using AI. For th...]]></description><link>https://blog.donvitocodes.com/how-to-build-your-ai-app-a-comprehensive-startup-simulator-course</link><guid isPermaLink="true">https://blog.donvitocodes.com/how-to-build-your-ai-app-a-comprehensive-startup-simulator-course</guid><category><![CDATA[courses]]></category><category><![CDATA[#ai-tools]]></category><category><![CDATA[Startups]]></category><category><![CDATA[indie-hacker]]></category><dc:creator><![CDATA[DonvitoCodes]]></dc:creator><pubDate>Fri, 27 Sep 2024 04:59:43 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1727413027363/39c94f46-a41a-4ada-a0a2-9d1965881ec5.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hi everyone!</p>
<p>Months back, I initially thought of doing a generic course to teach AI to non-technical people. I validated the idea by speaking to a few friends and former colleagues, and based on those conversations, people still have qualms about using AI. The ordinary people I talked to don't see the usefulness of AI yet, and I wasn't able to convince them of how they could use it in their daily life or work. Some people simply don't have the time to try it out. So, in short, I dropped the idea...</p>
<h3 id="heading-motivation">Motivation</h3>
<p>On the brighter side, I have spoken to some people in my discovery calls and realised they were asking for ideas on how to build an app and wanted to learn about the process and the technical side of things. They have entrepreneurial mindsets and are looking at how to earn from developing products, not necessarily AI apps.</p>
<p>So now I am drafting a course titled "How to Build Your AI App: A Comprehensive Startup Simulator Course". In this course, I'll share what I've learned over the past few months of my indie hacking journey and, of course, my past experiences.</p>
<h3 id="heading-target-audience">Target Audience</h3>
<p>I aim to make the course useful for my target audience:</p>
<ul>
<li><p>Developers who want to build stuff on their own or with a small team</p>
</li>
<li><p>Non-technical / business people who want to understand the technical side of building an app</p>
</li>
<li><p>Students who are interested in building their own app for their startup idea</p>
</li>
</ul>
<p>The course will have two (2) parts:</p>
<ol>
<li><p>Typical "I talk, you listen" lectures - the boring part</p>
</li>
<li><p>Role-playing simulations (optional - only for those interested in having fun!). To apply what they've learned, participants group into small teams and take on roles to ideate and conceptualise an AI product, then build a prototype using AI tools. The important part is to apply the concepts covered during the course.</p>
</li>
</ol>
<p>I also want to ask: do you think people will be interested in part 2, the simulation part?</p>
<h3 id="heading-course-delivery">Course Delivery</h3>
<p>The course will be fully online, so anyone from anywhere in the world can participate. I have a great tool in mind for this type of collaboration, which I can't share right now since it may be my competitive advantage.</p>
<p>The course is also a great networking opportunity since I plan to attract technical and non-technical people.</p>
<p>I attached my mind map of the outline below. Let me know if I should add anything else you would like to know about building an app.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727412448889/506d5d2e-2ef6-443a-952b-a80e1e6c3772.png" alt class="image--center mx-auto" /></p>
<p>That's all for now.</p>
<p>You can subscribe here to get updates about the course by email.</p>
<p>Cheers!</p>
<p>P.S. You can also share this with someone who might be interested.</p>
]]></content:encoded></item><item><title><![CDATA[OpenAI Structured Outputs: Why is it important?]]></title><description><![CDATA[Building reliable software often means ensuring that the data we work with follows strict formats or "schemas". Before the recent OpenAI update about structured output support, it didn’t guarantee that the data would always fit the exact structure de...]]></description><link>https://blog.donvitocodes.com/openai-structured-outputs-why-is-it-important</link><guid isPermaLink="true">https://blog.donvitocodes.com/openai-structured-outputs-why-is-it-important</guid><category><![CDATA[Python]]></category><category><![CDATA[generative ai]]></category><category><![CDATA[openai]]></category><dc:creator><![CDATA[DonvitoCodes]]></dc:creator><pubDate>Thu, 15 Aug 2024 06:44:05 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1723704096890/ecfc77ae-a716-4142-9f98-b760c00de415.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Building reliable software often means ensuring that the data we work with follows strict formats or "schemas". Before the recent OpenAI update about structured output support, it didn’t guarantee that the data would always fit the exact structure developers needed.</p>
<p>On August 6, 2024, OpenAI announced support for <a target="_blank" href="https://openai.com/index/introducing-structured-outputs-in-the-api/">Structured Outputs in the API</a>. This new feature is designed to ensure that the data generated by OpenAI models perfectly matches the JSON Schemas you provide. Structured Outputs make it easier for developers to create applications that require precise and structured data, whether it’s for building smart assistants, automating data entry, or creating complex workflows.</p>
<p>In OpenAI tests, the latest model, gpt-4o-2024-08-06, using Structured Outputs, achieved a perfect score in following complex JSON schemas, compared to less than 40% for the older gpt-4-0613 model. This shows just how much more reliable and accurate Structured Outputs are in the latest update.</p>
<h3 id="heading-why-is-structured-outputs-important">Why is Structured Outputs important?</h3>
<p>Consistency in AI-generated outputs is crucial when integrating with different systems. When building applications that rely on AI, having data that adheres to a specific structure ensures smoother and more reliable integration. Without consistent output, developers often need to add extra layers of validation and correction, which can complicate the process and introduce potential points of failure. Structured Outputs eliminate this uncertainty by guaranteeing that the data generated by the AI models fits the exact schema you’ve defined. This makes it easier to plug AI-generated data directly into your systems without worrying about mismatches or errors.</p>
<p>This consistency also leads to more stable integrations. When you can trust that the data from OpenAI will always follow the same structure, you can confidently build workflows that depend on that data. Whether it’s feeding information into a database, triggering actions in another application, or performing complex multi-step processes, Structured Outputs ensure that every piece of data will be just as you expect, reducing the need for additional processing and making your entire system more efficient and reliable.</p>
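<p>Before this update, that guarantee didn't exist, so code consuming model output had to defend itself. The sketch below is purely illustrative (the <code>raw_reply</code> text and the <code>extract_study_plan</code> helper are hypothetical, not part of OpenAI's API): it shows the kind of manual extraction and field checking that Structured Outputs now makes unnecessary.</p>

```python
import json

# Hypothetical raw model output from *before* Structured Outputs:
# the model may wrap JSON in prose or omit fields, so every consumer
# needs defensive parsing like this.
raw_reply = 'Sure! Here is your plan: {"name": "Algebra Basics", "modules": []}'

def extract_study_plan(reply: str) -> dict:
    """Best-effort extraction of a JSON object from free-form model text."""
    start, end = reply.find("{"), reply.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in model reply")
    plan = json.loads(reply[start:end + 1])
    # Manual checks that a guaranteed schema makes unnecessary
    for field in ("name", "modules"):
        if field not in plan:
            raise ValueError(f"missing required field: {field}")
    return plan

plan = extract_study_plan(raw_reply)
print(plan["name"])  # Algebra Basics
```

<p>With Structured Outputs, this glue code disappears: the response is guaranteed to match the schema you declared.</p>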
<p>In this post, we'll walk through a simple example of how to use Structured Outputs. Here is a high-level overview of how it works.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723701510337/f92122c6-0202-4065-8fe4-35dcc68890c0.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-study-plan-generator-example">Study Plan Generator Example</h3>
<p>In this example, we'll build a simple Study Plan Generator that sends a request to the OpenAI model <code>gpt-4o-2024-08-06</code> to create a study plan. The model returns JSON in the consistent format we specified in the <strong>response_format</strong> parameter. We pass the pydantic model <strong>StudyPlan</strong> so OpenAI knows the response needs to adhere to this format.</p>
<pre><code class="lang-python">completion = client.beta.chat.completions.parse(
    model=<span class="hljs-string">"gpt-4o-2024-08-06"</span>,
    max_tokens=<span class="hljs-number">1024</span>,
    temperature=<span class="hljs-number">0</span>,
    messages=[
        {<span class="hljs-string">"role"</span>: <span class="hljs-string">"system"</span>, <span class="hljs-string">"content"</span>: <span class="hljs-string">"You are a helpful math tutor and an expert in creating study plans for homeschooling kids."</span>},
        {<span class="hljs-string">"role"</span>: <span class="hljs-string">"user"</span>, <span class="hljs-string">"content"</span>: <span class="hljs-string">"create a study plan to learn Algebra basics for my 14 year old"</span>},
    ],
    response_format=StudyPlan,
)
</code></pre>
<p>The JSON data from the OpenAI response, containing all the structured information of the study plan, is then used to produce both a PDF document and an HTML file.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723702881599/ac6f5d7b-9dc7-46f0-8db3-8b2eb0df4888.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-key-components">Key Components</h3>
<ol>
<li><p><strong>Pydantic Models (</strong><code>Lesson</code>, <code>Module</code>, <code>StudyPlan</code>):</p>
<ul>
<li><p>The code defines several Pydantic models to structure the data:</p>
<pre><code class="lang-python">  <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Lesson</span>(<span class="hljs-params">BaseModel</span>):</span>
      title: str
      description: str
      duration_minutes: int

  <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Module</span>(<span class="hljs-params">BaseModel</span>):</span>
      title: str
      description: str
      objectives: List[str]
      lessons: List[Lesson]    

  <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">StudyPlan</span>(<span class="hljs-params">BaseModel</span>):</span>
      name: str
      start_date: str
      end_date: str
      notes: List[str]
      modules: List[Module]
</code></pre>
</li>
<li><p>These models ensure that the AI's output conforms to specific attributes, such as <code>title</code>, <code>description</code>, and <code>lessons</code> for each module in the study plan. This structuring is crucial for consistent and reliable data integration.</p>
</li>
</ul>
</li>
<li><p><strong>Interacting with OpenAI</strong>:</p>
<ul>
<li><p>The code initializes an OpenAI client and sends a request to generate a study plan.</p>
<pre><code class="lang-python">  client = OpenAI()

  completion = client.beta.chat.completions.parse(
      model=<span class="hljs-string">"gpt-4o-2024-08-06"</span>,
      max_tokens=<span class="hljs-number">1024</span>,
      temperature=<span class="hljs-number">0</span>,
      messages=[
          {<span class="hljs-string">"role"</span>: <span class="hljs-string">"system"</span>, <span class="hljs-string">"content"</span>: <span class="hljs-string">"You are a helpful math tutor and an expert in creating study plans for homeschooling kids."</span>},
          {<span class="hljs-string">"role"</span>: <span class="hljs-string">"user"</span>, <span class="hljs-string">"content"</span>: <span class="hljs-string">"create a study plan to learn Algebra basics for my 14 year old"</span>},
      ],
      response_format=StudyPlan,
  )
</code></pre>
</li>
<li><p>Here, the model is instructed to act as a math tutor and create a study plan for Algebra. The use of <code>response_format=StudyPlan</code> indicates that the model’s output should strictly follow the <code>StudyPlan</code> schema defined earlier.</p>
</li>
</ul>
</li>
<li><p><strong>Parsing and Validating the Response</strong>:</p>
<ul>
<li><p>Once the AI generates the study plan, the response is parsed and validated against the <code>StudyPlan</code> schema:</p>
<pre><code class="lang-python">  event = completion.choices[<span class="hljs-number">0</span>].message.parsed

  <span class="hljs-comment"># Convert the Pydantic model instance to a JSON string (Pydantic v2 API)</span>
  json_string = event.model_dump_json()

  <span class="hljs-comment"># Validate the JSON string against the StudyPlan schema</span>
  <span class="hljs-keyword">try</span>:
      study_plan_dict = json.loads(json_string)
      study_plan = StudyPlan.model_validate(study_plan_dict)
      print(<span class="hljs-string">"The JSON string is valid and adheres to the StudyPlan schema."</span>)
  <span class="hljs-keyword">except</span> Exception <span class="hljs-keyword">as</span> e:
      print(<span class="hljs-string">"Validation error:"</span>, e)
</code></pre>
</li>
<li><p>The <code>StudyPlan.model_validate()</code> function checks if the JSON string conforms to the expected schema, ensuring that the output is both valid and consistent.</p>
</li>
</ul>
</li>
<li><p><strong>Converting and Saving the JSON</strong>:</p>
<ul>
<li><p>The validated study plan is then converted to a formatted JSON string and saved to a file:</p>
<pre><code class="lang-python">  <span class="hljs-comment"># Format JSON string with indentation using the built-in json module</span>
  formatted_json = json.dumps(json.loads(json_string), indent=<span class="hljs-number">4</span>)

  <span class="hljs-keyword">with</span> open(<span class="hljs-string">"study_plan.json"</span>, <span class="hljs-string">'w'</span>) <span class="hljs-keyword">as</span> json_file:
      json_file.write(formatted_json)

  print(formatted_json)
</code></pre>
</li>
<li><p>This step ensures that the AI-generated study plan is saved in a human-readable and structured format, ready for further use or sharing.</p>
</li>
</ul>
</li>
<li><p><strong>Generating HTML and PDF Outputs</strong>:</p>
<ul>
<li><p>Finally, the code converts the JSON data into HTML and PDF formats:</p>
<pre><code class="lang-python">  generate_html(formatted_json, <span class="hljs-string">"study_plan.html"</span>)
  generate_pdf(formatted_json, <span class="hljs-string">"study_plan.pdf"</span>)
</code></pre>
</li>
<li><p>These functions likely take the structured JSON and format it into visually appealing documents, enhancing the accessibility and utility of the study plan.</p>
</li>
</ul>
</li>
</ol>
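<p>As a standalone sanity check of what <code>model_validate</code> actually enforces (separate from the API call), you can run the same Pydantic models against made-up sample data. The data below is purely illustrative; the point is that a conforming dict passes, while a dict missing a required field raises <code>ValidationError</code>.</p>

```python
from typing import List
from pydantic import BaseModel, ValidationError

# Same models as in the post
class Lesson(BaseModel):
    title: str
    description: str
    duration_minutes: int

class Module(BaseModel):
    title: str
    description: str
    objectives: List[str]
    lessons: List[Lesson]

class StudyPlan(BaseModel):
    name: str
    start_date: str
    end_date: str
    notes: List[str]
    modules: List[Module]

# Made-up sample data shaped like the API response
good = {
    "name": "Algebra Basics",
    "start_date": "2024-09-01",
    "end_date": "2024-10-01",
    "notes": ["Review weekly"],
    "modules": [{
        "title": "Linear Equations",
        "description": "Solving for x",
        "objectives": ["Isolate variables"],
        "lessons": [{"title": "Intro", "description": "Basics", "duration_minutes": 30}],
    }],
}
plan = StudyPlan.model_validate(good)
print(plan.modules[0].lessons[0].duration_minutes)  # 30

# Dropping a required field makes validation fail loudly
bad = dict(good)
del bad["modules"]
try:
    StudyPlan.model_validate(bad)
except ValidationError as e:
    print("rejected:", e.errors()[0]["loc"])
```

<p>This is exactly the guarantee Structured Outputs gives you at the API boundary: anything that reaches your code has already passed this shape check.</p>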
<h3 id="heading-full-source-code"><strong>Full Source Code</strong></h3>
<p><strong>main.py</strong></p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> pydantic <span class="hljs-keyword">import</span> BaseModel
<span class="hljs-keyword">from</span> typing <span class="hljs-keyword">import</span> List, Optional
<span class="hljs-keyword">from</span> openai <span class="hljs-keyword">import</span> OpenAI
<span class="hljs-keyword">import</span> json

<span class="hljs-keyword">from</span> output_generators <span class="hljs-keyword">import</span> generate_html, generate_pdf

<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Lesson</span>(<span class="hljs-params">BaseModel</span>):</span>
    title: str
    description: str
    duration_minutes: int

<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Module</span>(<span class="hljs-params">BaseModel</span>):</span>
    title: str
    description: str
    objectives: List[str]
    lessons: List[Lesson]    

<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">StudyPlan</span>(<span class="hljs-params">BaseModel</span>):</span>
    name: str
    start_date: str
    end_date: str
    notes: List[str]
    modules: List[Module]

client = OpenAI()

completion = client.beta.chat.completions.parse(
    model=<span class="hljs-string">"gpt-4o-2024-08-06"</span>,
    max_tokens=<span class="hljs-number">1024</span>,
    temperature=<span class="hljs-number">0</span>,
    messages=[
        {<span class="hljs-string">"role"</span>: <span class="hljs-string">"system"</span>, <span class="hljs-string">"content"</span>: <span class="hljs-string">"You are a helpful math tutor and an expert in creating study plans for homeschooling kids."</span>},
        {<span class="hljs-string">"role"</span>: <span class="hljs-string">"user"</span>, <span class="hljs-string">"content"</span>: <span class="hljs-string">"create a study plan to learn Algebra basics for my 14 year old"</span>},
    ],
    response_format=StudyPlan,
)

event = completion.choices[<span class="hljs-number">0</span>].message.parsed

<span class="hljs-comment"># Convert the Pydantic model instance to a JSON string (Pydantic v2 API)</span>
json_string = event.model_dump_json()

<span class="hljs-comment"># Validate the JSON string against the StudyPlan schema</span>
<span class="hljs-keyword">try</span>:
    study_plan_dict = json.loads(json_string)
    study_plan = StudyPlan.model_validate(study_plan_dict)
    print(<span class="hljs-string">"The JSON string is valid and adheres to the StudyPlan schema."</span>)
<span class="hljs-keyword">except</span> Exception <span class="hljs-keyword">as</span> e:
    print(<span class="hljs-string">"Validation error:"</span>, e)


<span class="hljs-comment"># Format JSON string with indentation using the built-in json module</span>
formatted_json = json.dumps(json.loads(json_string), indent=<span class="hljs-number">4</span>)

<span class="hljs-keyword">with</span> open(<span class="hljs-string">"study_plan.json"</span>, <span class="hljs-string">'w'</span>) <span class="hljs-keyword">as</span> json_file:
    json_file.write(formatted_json)

print(formatted_json)

generate_html(formatted_json, <span class="hljs-string">"study_plan.html"</span>)
generate_pdf(formatted_json, <span class="hljs-string">"study_plan.pdf"</span>)
</code></pre>
<p><strong>output_generators.py</strong></p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> fpdf <span class="hljs-keyword">import</span> FPDF
<span class="hljs-keyword">from</span> typing <span class="hljs-keyword">import</span> Dict
<span class="hljs-keyword">import</span> json
<span class="hljs-keyword">from</span> pathlib <span class="hljs-keyword">import</span> Path

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">generate_html</span>(<span class="hljs-params">json_data: str, output_file: str</span>) -&gt; <span class="hljs-keyword">None</span>:</span>
    <span class="hljs-comment"># Parse JSON data</span>
    study_plan = json.loads(json_data)

    <span class="hljs-comment"># Start building the HTML content with CSS styling</span>
    html_content = <span class="hljs-string">f"""
    &lt;html&gt;
    &lt;head&gt;
        &lt;title&gt;<span class="hljs-subst">{study_plan[<span class="hljs-string">'name'</span>]}</span>&lt;/title&gt;
        &lt;style&gt;
            body {{
                font-family: 'Arial', sans-serif;
                background-color: #f4f4f4;
                margin: 0;
                padding: 20px;
                line-height: 1.6;
            }}
            .container {{
                max-width: 800px;
                margin: auto;
                background: white;
                padding: 20px;
                box-shadow: 0 0 10px rgba(0, 0, 0, 0.1);
            }}
            h1 {{
                color: #333;
                font-size: 24px;
                border-bottom: 2px solid #ddd;
                padding-bottom: 10px;
            }}
            h2 {{
                color: #555;
                font-size: 20px;
                margin-top: 40px;
                border-bottom: 1px solid #ddd;
                padding-bottom: 10px;
            }}
            h3 {{
                color: #666;
                font-size: 18px;
                margin-top: 20px;
            }}
            p {{
                color: #444;
            }}
            ul {{
                list-style-type: disc;
                padding-left: 20px;
                color: #444;
            }}
            li {{
                margin-bottom: 10px;
            }}
            .module {{
                margin-bottom: 30px;
                padding: 10px;
                background-color: #f9f9f9;
            }}
            .lesson {{
                margin-left: 20px;
                padding-left: 10px;
            }}
            .label {{
                font-weight: bold;
                color: #333;
                margin-top: 10px;
            }}
        &lt;/style&gt;
    &lt;/head&gt;
    &lt;body&gt;
        &lt;div class="container"&gt;
            &lt;h1&gt;Study Plan: <span class="hljs-subst">{study_plan[<span class="hljs-string">'name'</span>]}</span>&lt;/h1&gt;
            &lt;p&gt;&lt;strong&gt;Start Date:&lt;/strong&gt; <span class="hljs-subst">{study_plan[<span class="hljs-string">'start_date'</span>]}</span>&lt;/p&gt;
            &lt;p&gt;&lt;strong&gt;End Date:&lt;/strong&gt; <span class="hljs-subst">{study_plan[<span class="hljs-string">'end_date'</span>]}</span>&lt;/p&gt;
            &lt;h2&gt;Notes&lt;/h2&gt;
            &lt;ul&gt;
    """</span>

    <span class="hljs-keyword">for</span> note <span class="hljs-keyword">in</span> study_plan[<span class="hljs-string">'notes'</span>]:
        html_content += <span class="hljs-string">f"&lt;li&gt;<span class="hljs-subst">{note}</span>&lt;/li&gt;"</span>

    html_content += <span class="hljs-string">"&lt;/ul&gt;&lt;h2&gt;Modules&lt;/h2&gt;"</span>

    <span class="hljs-keyword">for</span> module <span class="hljs-keyword">in</span> study_plan[<span class="hljs-string">'modules'</span>]:
        html_content += <span class="hljs-string">f"""
        &lt;div class="module"&gt;            
            &lt;h3&gt;<span class="hljs-subst">{module[<span class="hljs-string">'title'</span>]}</span>&lt;/h3&gt;
            &lt;p&gt;<span class="hljs-subst">{module[<span class="hljs-string">'description'</span>]}</span>&lt;/p&gt;
            &lt;div class="label"&gt;Objectives&lt;/div&gt;
            &lt;ul&gt;
        """</span>
        <span class="hljs-keyword">for</span> objective <span class="hljs-keyword">in</span> module[<span class="hljs-string">'objectives'</span>]:
            html_content += <span class="hljs-string">f"&lt;li&gt;<span class="hljs-subst">{objective}</span>&lt;/li&gt;"</span>
        html_content += <span class="hljs-string">"&lt;/ul&gt; &lt;div class='label'&gt;Lessons&lt;/div&gt;&lt;div class='lesson'&gt;&lt;ul&gt;"</span>

        <span class="hljs-keyword">for</span> lesson <span class="hljs-keyword">in</span> module[<span class="hljs-string">'lessons'</span>]:
            html_content += <span class="hljs-string">f"""
            &lt;li&gt;
                &lt;strong&gt;<span class="hljs-subst">{lesson[<span class="hljs-string">'title'</span>]}</span>&lt;/strong&gt;&lt;br&gt;                
                <span class="hljs-subst">{lesson[<span class="hljs-string">'description'</span>]}</span>&lt;br&gt;
                &lt;div class="label"&gt;Duration:&lt;/div&gt;
                &lt;em&gt;<span class="hljs-subst">{lesson[<span class="hljs-string">'duration_minutes'</span>]}</span> minutes&lt;/em&gt;
            &lt;/li&gt;
            """</span>

        html_content += <span class="hljs-string">"&lt;/ul&gt;&lt;/div&gt;&lt;/div&gt;"</span>

    html_content += <span class="hljs-string">"""
        &lt;/div&gt;
    &lt;/body&gt;
    &lt;/html&gt;
    """</span>

    <span class="hljs-comment"># Write HTML content to the specified output file</span>
    <span class="hljs-keyword">with</span> open(output_file, <span class="hljs-string">'w'</span>) <span class="hljs-keyword">as</span> file:
        file.write(html_content)

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">generate_pdf</span>(<span class="hljs-params">json_data: str, output_file: str</span>) -&gt; <span class="hljs-keyword">None</span>:</span>
    <span class="hljs-comment"># Parse JSON data</span>
    study_plan = json.loads(json_data)

    <span class="hljs-comment"># Create PDF document</span>
    pdf = FPDF()
    pdf.set_auto_page_break(auto=<span class="hljs-literal">True</span>, margin=<span class="hljs-number">15</span>)
    pdf.add_page()

    <span class="hljs-comment"># Add title</span>
    pdf.set_font(<span class="hljs-string">"Arial"</span>, <span class="hljs-string">'B'</span>, <span class="hljs-number">16</span>)
    pdf.cell(<span class="hljs-number">0</span>, <span class="hljs-number">10</span>, <span class="hljs-string">f"Study Plan: <span class="hljs-subst">{study_plan[<span class="hljs-string">'name'</span>]}</span>"</span>, <span class="hljs-number">0</span>, <span class="hljs-number">1</span>, <span class="hljs-string">'C'</span>)

    <span class="hljs-comment"># Add study plan details</span>
    pdf.set_font(<span class="hljs-string">"Arial"</span>, <span class="hljs-string">''</span>, <span class="hljs-number">12</span>)
    pdf.cell(<span class="hljs-number">0</span>, <span class="hljs-number">10</span>, <span class="hljs-string">f"Start Date: <span class="hljs-subst">{study_plan[<span class="hljs-string">'start_date'</span>]}</span>"</span>, <span class="hljs-number">0</span>, <span class="hljs-number">1</span>)
    pdf.cell(<span class="hljs-number">0</span>, <span class="hljs-number">10</span>, <span class="hljs-string">f"End Date: <span class="hljs-subst">{study_plan[<span class="hljs-string">'end_date'</span>]}</span>"</span>, <span class="hljs-number">0</span>, <span class="hljs-number">1</span>)

    <span class="hljs-comment"># Add notes</span>
    pdf.set_font(<span class="hljs-string">"Arial"</span>, <span class="hljs-string">'B'</span>, <span class="hljs-number">14</span>)
    pdf.cell(<span class="hljs-number">0</span>, <span class="hljs-number">10</span>, <span class="hljs-string">"Notes:"</span>, <span class="hljs-number">0</span>, <span class="hljs-number">1</span>)
    pdf.set_font(<span class="hljs-string">"Arial"</span>, <span class="hljs-string">''</span>, <span class="hljs-number">12</span>)
    <span class="hljs-keyword">for</span> note <span class="hljs-keyword">in</span> study_plan[<span class="hljs-string">'notes'</span>]:
        pdf.multi_cell(<span class="hljs-number">0</span>, <span class="hljs-number">10</span>, <span class="hljs-string">f"- <span class="hljs-subst">{note}</span>"</span>)

    <span class="hljs-comment"># Add modules with visual differentiation</span>
    pdf.set_font(<span class="hljs-string">"Arial"</span>, <span class="hljs-string">'B'</span>, <span class="hljs-number">14</span>)
    pdf.cell(<span class="hljs-number">0</span>, <span class="hljs-number">10</span>, <span class="hljs-string">"Modules:"</span>, <span class="hljs-number">0</span>, <span class="hljs-number">1</span>)

    <span class="hljs-keyword">for</span> module <span class="hljs-keyword">in</span> study_plan[<span class="hljs-string">'modules'</span>]:
        pdf.set_font(<span class="hljs-string">"Arial"</span>, <span class="hljs-string">'B'</span>, <span class="hljs-number">12</span>)
        pdf.set_fill_color(<span class="hljs-number">240</span>, <span class="hljs-number">240</span>, <span class="hljs-number">240</span>)  <span class="hljs-comment"># Light gray background for module title</span>
        pdf.cell(<span class="hljs-number">0</span>, <span class="hljs-number">10</span>, <span class="hljs-string">f"Module Title: <span class="hljs-subst">{module[<span class="hljs-string">'title'</span>]}</span>"</span>, <span class="hljs-number">0</span>, <span class="hljs-number">1</span>, <span class="hljs-string">'L'</span>, fill=<span class="hljs-literal">True</span>)

        pdf.set_font(<span class="hljs-string">"Arial"</span>, <span class="hljs-string">''</span>, <span class="hljs-number">12</span>)
        pdf.multi_cell(<span class="hljs-number">0</span>, <span class="hljs-number">10</span>, <span class="hljs-string">f"Description: <span class="hljs-subst">{module[<span class="hljs-string">'description'</span>]}</span>"</span>)

        pdf.set_font(<span class="hljs-string">"Arial"</span>, <span class="hljs-string">'I'</span>, <span class="hljs-number">12</span>)
        pdf.cell(<span class="hljs-number">0</span>, <span class="hljs-number">10</span>, <span class="hljs-string">"Objectives:"</span>, <span class="hljs-number">0</span>, <span class="hljs-number">1</span>)
        <span class="hljs-keyword">for</span> objective <span class="hljs-keyword">in</span> module[<span class="hljs-string">'objectives'</span>]:
            pdf.multi_cell(<span class="hljs-number">0</span>, <span class="hljs-number">10</span>, <span class="hljs-string">f"- <span class="hljs-subst">{objective}</span>"</span>)

        pdf.set_font(<span class="hljs-string">"Arial"</span>, <span class="hljs-string">''</span>, <span class="hljs-number">12</span>)
        pdf.cell(<span class="hljs-number">0</span>, <span class="hljs-number">10</span>, <span class="hljs-string">"Lessons:"</span>, <span class="hljs-number">0</span>, <span class="hljs-number">1</span>)

        <span class="hljs-keyword">for</span> lesson <span class="hljs-keyword">in</span> module[<span class="hljs-string">'lessons'</span>]:
            pdf.set_fill_color(<span class="hljs-number">255</span>, <span class="hljs-number">240</span>, <span class="hljs-number">240</span>)  <span class="hljs-comment"># Slightly colored background for lessons</span>
            pdf.cell(<span class="hljs-number">0</span>, <span class="hljs-number">10</span>, <span class="hljs-string">f"  Lesson Title: <span class="hljs-subst">{lesson[<span class="hljs-string">'title'</span>]}</span>"</span>, <span class="hljs-number">0</span>, <span class="hljs-number">1</span>, <span class="hljs-string">'L'</span>, fill=<span class="hljs-literal">True</span>)
            pdf.multi_cell(<span class="hljs-number">0</span>, <span class="hljs-number">10</span>, <span class="hljs-string">f"  Description: <span class="hljs-subst">{lesson[<span class="hljs-string">'description'</span>]}</span>"</span>)
            pdf.cell(<span class="hljs-number">0</span>, <span class="hljs-number">10</span>, <span class="hljs-string">f"  Duration: <span class="hljs-subst">{lesson[<span class="hljs-string">'duration_minutes'</span>]}</span> minutes"</span>, <span class="hljs-number">0</span>, <span class="hljs-number">1</span>)
            pdf.ln(<span class="hljs-number">5</span>)

        pdf.ln(<span class="hljs-number">10</span>)

    <span class="hljs-comment"># Save the PDF to the specified output file</span>
    pdf.output(output_file)
</code></pre>
<p><strong>requirements.txt</strong></p>
<pre><code>annotated-types==0.7.0
anyio==4.4.0
certifi==2024.7.4
distro==1.9.0
fpdf==1.7.2
h11==0.14.0
httpcore==1.0.5
httpx==0.27.0
idna==3.7
jiter==0.5.0
openai==1.40.6
pydantic==2.8.2
pydantic_core==2.20.1
sniffio==1.3.1
tqdm==4.66.5
typing_extensions==4.12.2
</code></pre>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Source code also downloadable in my gist <a target="_blank" href="https://gist.github.com/donvito/d3927c09029770592b5644f04942b811">https://gist.github.com/donvito/d3927c09029770592b5644f04942b811</a></div>
</div>

<h3 id="heading-running-the-example">Running the example</h3>
<p>Install the dependencies using <strong>pip install</strong>, then run the application.</p>
<pre><code class="lang-bash">pip install -r requirements.txt
</code></pre>
<pre><code class="lang-bash">python main.py
</code></pre>
<p>With Structured Outputs, we can now confidently build and integrate AI-driven solutions that are both precise and dependable, saving time and reducing the need for extensive validation or error handling. This new feature of OpenAI will make integration with different systems seamless and reliable.</p>
]]></content:encoded></item><item><title><![CDATA[Vercel AI SDK 3.3: Powering Up AI Development]]></title><description><![CDATA[Vercel has just announced the release of Vercel AI SDK 3.3, bringing a host of new features and improvements to their toolkit for building AI applications with JavaScript and TypeScript. This update introduces exciting capabilities that will enhance ...]]></description><link>https://blog.donvitocodes.com/vercel-ai-sdk-33-powering-up-ai-development</link><guid isPermaLink="true">https://blog.donvitocodes.com/vercel-ai-sdk-33-powering-up-ai-development</guid><category><![CDATA[generative ai]]></category><category><![CDATA[Vercel]]></category><category><![CDATA[Next.js]]></category><category><![CDATA[Svelte]]></category><dc:creator><![CDATA[DonvitoCodes]]></dc:creator><pubDate>Wed, 07 Aug 2024 05:09:14 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1723007510228/39a51df9-f646-4bc7-93f2-9ceab17607b9.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Vercel has just announced the release of <a target="_blank" href="https://vercel.com/blog/vercel-ai-sdk-3-3">Vercel AI SDK 3.3</a>, bringing a host of new features and improvements to their toolkit for building AI applications with JavaScript and TypeScript. This update introduces exciting capabilities that will enhance developers' ability to create sophisticated AI-powered applications.</p>
<h2 id="heading-key-updates">Key Updates</h2>
<h3 id="heading-1-tracing-experimental">1. Tracing (Experimental)</h3>
<p>Vercel AI SDK now supports OpenTelemetry for tracing, offering developers crucial insights into their AI applications. This feature allows for better observability, helping developers understand timing, token usage, prompts, and response content for individual model calls.</p>
<h3 id="heading-2-multi-modal-file-attachments-experimental">2. Multi-Modal File Attachments (Experimental)</h3>
<p>The new version introduces support for sending file attachments along with messages in AI chat applications. This feature enhances the <code>useChat</code> React hook, allowing developers to easily implement functionalities like image uploads in their chatbots.</p>
<h3 id="heading-3-useobject-hook-experimental">3. useObject Hook (Experimental)</h3>
<p>This new React hook enables streaming of structured object generation directly to the client. It's particularly useful for creating dynamic interfaces that display JSON objects as they're being streamed, opening up possibilities for more interactive and responsive AI applications.</p>
<h3 id="heading-4-additional-llm-settings">4. Additional LLM Settings</h3>
<p>Vercel has expanded the toolkit's language model capabilities with new features:</p>
<ul>
<li><p>JSON schema support for tools and structured object generation</p>
</li>
<li><p>Stop sequences for more control over text generation</p>
</li>
<li><p>Custom headers support for enhanced flexibility in API calls</p>
</li>
</ul>
<h2 id="heading-what-this-means-for-developers">What This Means for Developers</h2>
<ol>
<li><p><strong>Improved Observability</strong>: The tracing feature will make it easier to debug and optimize AI applications, especially when dealing with the non-deterministic nature of language models.</p>
</li>
<li><p><strong>Enhanced User Experiences</strong>: Multi-modal attachments and the <code>useObject</code> hook will allow developers to create more sophisticated and interactive AI interfaces.</p>
</li>
<li><p><strong>Greater Flexibility</strong>: The additional LLM settings provide more control and customization options when working with language models.</p>
</li>
</ol>
<p>Vercel AI SDK represents a significant step forward in making AI development more accessible and powerful for JavaScript and TypeScript developers. Whether you're building chatbots, content generation tools, or complex AI-powered applications, these new features provide the tools needed to create more robust and interactive experiences.</p>
<p>As AI continues to play an increasingly important role in web development, tools like Vercel AI SDK are becoming indispensable for developers looking to stay at the forefront of technology. With this release, Vercel continues to demonstrate its commitment to empowering developers in the AI space.</p>
]]></content:encoded></item><item><title><![CDATA[Using Groq's Whisper API and Go for Transcribing Audio to Text]]></title><description><![CDATA[In an age where communication happens through various mediums, the ability to convert spoken language into written text has become increasingly valuable. Speech-to-text technology has a wide range of applications that can enhance productivity, access...]]></description><link>https://blog.donvitocodes.com/using-groqs-whisper-api-and-go-for-transcribing-audio-to-text</link><guid isPermaLink="true">https://blog.donvitocodes.com/using-groqs-whisper-api-and-go-for-transcribing-audio-to-text</guid><category><![CDATA[generative ai]]></category><category><![CDATA[Go Language]]></category><category><![CDATA[speech to text]]></category><dc:creator><![CDATA[DonvitoCodes]]></dc:creator><pubDate>Mon, 29 Jul 2024 03:44:44 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1722224471283/a5e40441-7920-425b-9ed0-0cd646c98b7c.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In an age where communication happens through various mediums, the ability to convert spoken language into written text has become increasingly valuable. Speech-to-text technology has a wide range of applications that can enhance productivity, accessibility, and user experience.</p>
<h2 id="heading-speech-to-text-use-cases">Speech to Text Use Cases</h2>
<p>Here are some examples of how you might use speech-to-text:</p>
<ol>
<li><p>Meeting Transcription: Automatically create text records of business meetings, conferences, or interviews.</p>
</li>
<li><p>Closed Captioning: Generate captions for videos, making content accessible to deaf or hard-of-hearing viewers.</p>
</li>
<li><p>Voice Commands: Enable voice control in apps or smart home devices by converting spoken commands to text.</p>
</li>
<li><p>Podcast Transcription: Create text versions of podcasts to improve SEO and make content searchable.</p>
</li>
<li><p>Voice Notes: Allow users to dictate notes or memos, which are then converted to text for easy editing and sharing.</p>
</li>
<li><p>Call Center Analytics: Transcribe customer service calls to analyze common issues, sentiment, and agent performance.</p>
</li>
<li><p>Medical Dictation: Help healthcare professionals create patient notes or reports by speaking rather than typing.</p>
</li>
<li><p>Legal Documentation: Transcribe court proceedings, depositions, or client interviews for accurate record-keeping.</p>
</li>
<li><p>Educational Content: Convert lectures or educational videos into text for students to review or for creating study materials.</p>
</li>
<li><p>Accessibility Tools: Help people with disabilities interact with digital content by converting audio to text.</p>
</li>
</ol>
<p>These are just a few examples of how speech-to-text technology can be applied across industries. Now, let's dive into how Groq's Whisper API can help you implement it in your own applications.</p>
<h2 id="heading-what-can-whisper-api-do">What can Whisper API Do?</h2>
<p>Groq's <a target="_blank" href="https://console.groq.com/docs/speech-text">Whisper API</a> can turn audio into text.</p>
<h2 id="heading-how-to-use-it">How to Use It</h2>
<p>To use the API, you'll need to call this endpoint:</p>
<ul>
<li>For transcription: <a target="_blank" href="https://api.groq.com/openai/v1/audio/transcriptions"><code>https://api.groq.com/openai/v1/audio/transcriptions</code></a></li>
</ul>
<p>The API uses a model called <code>whisper-large-v3</code>. This model is very good at transcribing audio.</p>
<h2 id="heading-things-to-remember">Things to Remember</h2>
<ul>
<li><p>You can only upload files up to 25 MB in size.</p>
</li>
<li><p>The API can work with these types of audio files: mp3, mp4, mpeg, mpga, m4a, wav, and webm.</p>
</li>
<li><p>If your file has more than one audio track (like a video with different language tracks), the API will only use the first track.</p>
</li>
</ul>
<h2 id="heading-preparing-your-audio-file">Preparing Your Audio File</h2>
<p>Before using the API, you might need to convert your audio file. The API works best with audio downsampled to 16,000 Hz mono. Here's how you can convert your file using a tool called ffmpeg:</p>
<pre><code class="lang-bash">ffmpeg \
  -i &lt;your file&gt; \
  -ar 16000 \
  -ac 1 \
  -map 0:a: \
  &lt;output file name&gt;
</code></pre>
<p>Replace <code>&lt;your file&gt;</code> with the name of your audio file, and <code>&lt;output file name&gt;</code> with what you want to call the new file.</p>
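<p>If your pipeline is in Go, you can shell out to ffmpeg with the same flags. A hedged sketch, assuming ffmpeg is on your PATH; <code>ffmpegArgs</code> and <code>convertTo16kMono</code> are illustrative names of my own:</p>

```go
package main

import (
	"fmt"
	"os/exec"
)

// ffmpegArgs builds the argument list matching the command above:
// resample to 16 kHz (-ar), mono (-ac 1), first audio track (-map 0:a:).
func ffmpegArgs(in, out string) []string {
	return []string{"-i", in, "-ar", "16000", "-ac", "1", "-map", "0:a:", out}
}

// convertTo16kMono runs ffmpeg (assumed to be installed) to produce
// a file in the format the Whisper API handles best.
func convertTo16kMono(in, out string) error {
	return exec.Command("ffmpeg", ffmpegArgs(in, out)...).Run()
}

func main() {
	// Print the arguments that would be passed to ffmpeg.
	fmt.Println(ffmpegArgs("audio.m4a", "audio-16k.wav"))
}
```

Separating argument construction from execution also makes the command easy to unit-test without ffmpeg installed.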
<h2 id="heading-sample-code">Sample Code</h2>
<pre><code class="lang-go"><span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"bytes"</span>
    <span class="hljs-string">"fmt"</span>
    <span class="hljs-string">"io"</span>
    <span class="hljs-string">"mime/multipart"</span>
    <span class="hljs-string">"net/http"</span>
    <span class="hljs-string">"os"</span>
)

<span class="hljs-keyword">const</span> (
    apiBaseUrl = <span class="hljs-string">"https://api.groq.com/openai"</span>
    STTWhisperLargeV3 = <span class="hljs-string">"whisper-large-v3"</span>
)

<span class="hljs-keyword">type</span> GroqClient <span class="hljs-keyword">struct</span> {
    ApiKey <span class="hljs-keyword">string</span>
}

<span class="hljs-keyword">type</span> GroqMessage <span class="hljs-keyword">struct</span> {
    Role    <span class="hljs-keyword">string</span> <span class="hljs-string">`json:"role"`</span>
    Content <span class="hljs-keyword">string</span> <span class="hljs-string">`json:"content"`</span>
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {

    apiKey := os.Getenv(<span class="hljs-string">"GROQ_API_KEY"</span>)
    gclient := &amp;GroqClient{
        ApiKey: apiKey,
    }

    transcriptText, err := gclient.TranscribeAudio(<span class="hljs-string">"audio.m4a"</span>)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span>{
        <span class="hljs-built_in">panic</span>(err)
    }
    fmt.Println(<span class="hljs-string">"transcriptText"</span>, transcriptText)

    <span class="hljs-comment">// Write the transcript to a file so the final message is accurate</span>
    err = os.WriteFile(<span class="hljs-string">"transcript.txt"</span>, []<span class="hljs-keyword">byte</span>(transcriptText), <span class="hljs-number">0644</span>)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-built_in">panic</span>(err)
    }
    fmt.Println(<span class="hljs-string">"Transcript saved to transcript.txt"</span>)
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(g *GroqClient)</span> <span class="hljs-title">TranscribeAudio</span><span class="hljs-params">(audioFile <span class="hljs-keyword">string</span>)</span> <span class="hljs-params">(<span class="hljs-keyword">string</span>, error)</span></span> {

    <span class="hljs-comment">// File to upload</span>
    filepath := audioFile
    file, err := os.Open(filepath)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        fmt.Println(<span class="hljs-string">"Error opening file:"</span>, err)
        <span class="hljs-keyword">return</span> <span class="hljs-string">""</span>, err
    }
    <span class="hljs-keyword">defer</span> file.Close()

    transcriptionUrl := <span class="hljs-string">"/v1/audio/transcriptions"</span>
    finalUrl := fmt.Sprintf(<span class="hljs-string">"%s%s"</span>, apiBaseUrl, transcriptionUrl)

    <span class="hljs-comment">// Prepare form data</span>
    body := &amp;bytes.Buffer{}
    writer := multipart.NewWriter(body)

    <span class="hljs-comment">// Add file field</span>
    part, err := writer.CreateFormFile(<span class="hljs-string">"file"</span>, filepath)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        fmt.Println(<span class="hljs-string">"Error creating form file:"</span>, err)
        <span class="hljs-keyword">return</span> <span class="hljs-string">""</span>, err
    }
    _, err = io.Copy(part, file)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        fmt.Println(<span class="hljs-string">"Error copying file:"</span>, err)
        <span class="hljs-keyword">return</span> <span class="hljs-string">""</span>, err
    }

    temp := <span class="hljs-string">"0"</span>
    responseFormat := <span class="hljs-string">"json"</span>
    language := <span class="hljs-string">"en"</span>

    <span class="hljs-comment">// Add other fields</span>
    _ = writer.WriteField(<span class="hljs-string">"model"</span>, STTWhisperLargeV3)
    _ = writer.WriteField(<span class="hljs-string">"temperature"</span>, temp)
    _ = writer.WriteField(<span class="hljs-string">"response_format"</span>, responseFormat)
    _ = writer.WriteField(<span class="hljs-string">"language"</span>, language)

    <span class="hljs-comment">// Close writer</span>
    err = writer.Close()
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        fmt.Println(<span class="hljs-string">"Error closing writer:"</span>, err)
        <span class="hljs-keyword">return</span> <span class="hljs-string">""</span>, err
    }
    <span class="hljs-comment">// Create POST request</span>
    req, err := http.NewRequest(http.MethodPost, finalUrl, body)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-string">""</span>, err
    }

    <span class="hljs-comment">// Set headers</span>
    req.Header.Set(<span class="hljs-string">"Content-Type"</span>, writer.FormDataContentType())
    req.Header.Set(<span class="hljs-string">"Authorization"</span>, fmt.Sprintf(<span class="hljs-string">"Bearer %s"</span>, g.ApiKey))

    <span class="hljs-comment">// Create HTTP client</span>
    client := &amp;http.Client{}

    <span class="hljs-comment">// Send request</span>
    resp, err := client.Do(req)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        fmt.Println(<span class="hljs-string">"Error sending request:"</span>, err)
        <span class="hljs-keyword">return</span> <span class="hljs-string">""</span>, err
    }
    <span class="hljs-keyword">defer</span> resp.Body.Close()

    <span class="hljs-comment">// Read response body</span>
    responseBody, err := io.ReadAll(resp.Body)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        fmt.Println(<span class="hljs-string">"Error reading response body:"</span>, err)
        <span class="hljs-keyword">return</span> <span class="hljs-string">""</span>, err
    }

    r := <span class="hljs-keyword">string</span>(responseBody)

    <span class="hljs-keyword">return</span> r, <span class="hljs-literal">nil</span>

}
</code></pre>
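<p>The sample above returns the raw JSON body as a string. If you only need the spoken text, you can unmarshal the <code>text</code> field of the <code>json</code> response format into a struct. A minimal sketch; the <code>extractText</code> helper is an illustrative name of my own:</p>

```go
package main

import (
	"encoding/json"
	"fmt"
)

// transcription mirrors the "text" field of the Whisper JSON response.
type transcription struct {
	Text string `json:"text"`
}

// extractText pulls the transcript out of the raw JSON body.
func extractText(body string) (string, error) {
	var t transcription
	if err := json.Unmarshal([]byte(body), &t); err != nil {
		return "", err
	}
	return t.Text, nil
}

func main() {
	text, err := extractText(`{"text": "Hello world"}`)
	if err != nil {
		panic(err)
	}
	fmt.Println(text) // Hello world
}
```

You could call this on the string returned by <code>TranscribeAudio</code> before writing it to a file.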
<p>With Groq's Whisper API, turning speech into text is now easier than ever. Whether you're building an app for meeting transcriptions, creating accessibility tools, or any of the other use cases we've explored, this technology can help you achieve your goals.  </p>
<p>Give it a try in your next project and see how it can transform your approach to handling audio content!</p>
<hr />
<p>If you're interested in learning more about developing apps with Generative AI, subscribe to my blog for more tutorials and sample code. You can also follow me on <a target="_blank" href="https://x.com/donvito"><strong>Twitter</strong></a> or connect with me on <a target="_blank" href="https://www.linkedin.com/in/melvinvivas/"><strong>LinkedIn</strong></a>.</p>
]]></content:encoded></item><item><title><![CDATA[Integrate AI into Java Apps — No Python Required]]></title><description><![CDATA[In this blog post, we'll explore how to leverage Spring AI to create a simple chat API using OpenAI's language model and Java. Before we go into the example, let's see why this is exciting!
Java Developers, Rejoice!
No need to learn a new language!
A...]]></description><link>https://blog.donvitocodes.com/integrate-ai-into-java-apps-no-python-required</link><guid isPermaLink="true">https://blog.donvitocodes.com/integrate-ai-into-java-apps-no-python-required</guid><category><![CDATA[Java]]></category><category><![CDATA[openai]]></category><category><![CDATA[Springboot]]></category><category><![CDATA[Spring framework]]></category><category><![CDATA[Spring]]></category><dc:creator><![CDATA[DonvitoCodes]]></dc:creator><pubDate>Fri, 19 Jul 2024 09:22:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1721380883700/4b1e0f22-062a-44d3-9a68-7eb885bcbd0e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this blog post, we'll explore how to leverage <a target="_blank" href="https://docs.spring.io/spring-ai/reference/index.html">Spring AI</a> to create a simple chat API using OpenAI's language model and Java. Before we go into the example, let's see why this is exciting!</p>
<h2 id="heading-java-developers-rejoice">Java Developers, Rejoice!</h2>
<p><strong>No need to learn a new language!</strong></p>
<p>As a seasoned Java developer, you might think that diving into AI development requires learning a new language, such as Python. After all, Python is widely recognized for its robust AI libraries and frameworks. However, with Spring AI, Java developers can achieve AI integration without stepping outside their comfort zone.</p>
<p>Most corporate applications are built using Java due to its stability, scalability, and enterprise-level support. Integrating AI directly into these applications using Spring AI allows businesses to enhance their functionality and user experience without needing to overhaul their technology stack. It also ensures that developers can work within a familiar environment, improving productivity and reducing the learning curve.</p>
<p>Staying within the Java ecosystem means you can continue to use your existing skills and knowledge. You won't have to invest time and resources in learning a new language or adapting to different paradigms. This not only saves time but also helps maintain consistency in your codebase, making future updates and maintenance more manageable.</p>
<h2 id="heading-what-is-spring-ai">What is Spring AI?</h2>
<p>Spring AI is a library that aims to make it easier for developers to create applications with artificial intelligence features. The main goal of Spring AI is to help connect your company's data and systems with AI models in a straightforward way.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">At its core, Spring AI addresses the fundamental challenge of AI integration: <code>Connecting your enterprise Data and APIs with the AI Models</code>.</div>
</div>

<p>The library offers a set of tools and features that developers can use as building blocks for AI applications. These include support for various AI model providers like OpenAI, Microsoft, Amazon, Google, and Hugging Face. It supports different types of AI models such as chat and image generation. Spring AI is designed to be flexible, allowing developers to easily switch between different components without having to rewrite a lot of code.</p>
<h2 id="heading-spring-ai-architecture">Spring AI Architecture</h2>
<p>Here is the architecture of <strong>Spring AI</strong> <em>(source: official documentation)</em>:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1721379450640/101d9616-b658-4712-adc8-483b98de60b0.jpeg" alt class="image--center mx-auto" /></p>
<p><em>Supported models</em></p>
<p>OpenAI, Microsoft, Amazon Bedrock, Google, Hugging Face, and more.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1721377899366/9b2c6c61-ec26-493f-b65a-1565a0146c6a.jpeg" alt class="image--center mx-auto" /></p>
<p>Some key features of Spring AI include the ability to use AI models from different providers with a consistent interface, support for turning AI outputs into Java objects, and tools for managing and processing data for AI use. It also provides ready-to-use components that work well with Spring Boot, a popular framework for building Java applications. With these features, developers can more easily create applications that do things like answer questions based on company documents or have conversations using a company's knowledge base.</p>
<h3 id="heading-seamless-java-integration">Seamless Java Integration</h3>
<p>Spring AI is designed to integrate AI capabilities into Java applications effortlessly. It provides a straightforward way to connect with language models using Java, making it an excellent choice for those who want to keep their tech stack consistent. This library allows you to leverage the power of conversational AI without needing to rewrite your existing codebase in a different language.</p>
<p>Imagine adding a conversational interface to your enterprise application that can handle customer inquiries, provide personalized recommendations, or automate routine tasks. With Spring AI, these features can be implemented quickly and efficiently. You can focus on developing the business logic and user experience without worrying about the intricacies of a new programming language.</p>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>Before we begin, make sure you have the following:</p>
<ol>
<li><p>Java Development Kit (JDK) 17 or later</p>
</li>
<li><p>Maven</p>
</li>
<li><p>An OpenAI API key</p>
</li>
</ol>
<h2 id="heading-project-setup">Project Setup</h2>
<p>Let's start by setting up a new Spring Boot project with the necessary dependencies.</p>
<h3 id="heading-pomxml">pom.xml</h3>
<p>First, we'll configure our <code>pom.xml</code> file:</p>
<pre><code class="lang-xml"><span class="hljs-meta">&lt;?xml version="1.0" encoding="UTF-8"?&gt;</span>
<span class="hljs-tag">&lt;<span class="hljs-name">project</span> <span class="hljs-attr">xmlns</span>=<span class="hljs-string">"http://maven.apache.org/POM/4.0.0"</span> <span class="hljs-attr">xmlns:xsi</span>=<span class="hljs-string">"http://www.w3.org/2001/XMLSchema-instance"</span>
    <span class="hljs-attr">xsi:schemaLocation</span>=<span class="hljs-string">"http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">modelVersion</span>&gt;</span>4.0.0<span class="hljs-tag">&lt;/<span class="hljs-name">modelVersion</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">parent</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">groupId</span>&gt;</span>org.springframework.boot<span class="hljs-tag">&lt;/<span class="hljs-name">groupId</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">artifactId</span>&gt;</span>spring-boot-starter-parent<span class="hljs-tag">&lt;/<span class="hljs-name">artifactId</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">version</span>&gt;</span>3.3.0<span class="hljs-tag">&lt;/<span class="hljs-name">version</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">relativePath</span>/&gt;</span> <span class="hljs-comment">&lt;!-- lookup parent from repository --&gt;</span>
    <span class="hljs-tag">&lt;/<span class="hljs-name">parent</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">groupId</span>&gt;</span>com.donvitocodes<span class="hljs-tag">&lt;/<span class="hljs-name">groupId</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">artifactId</span>&gt;</span>openai-java<span class="hljs-tag">&lt;/<span class="hljs-name">artifactId</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">version</span>&gt;</span>0.0.1-SNAPSHOT<span class="hljs-tag">&lt;/<span class="hljs-name">version</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">name</span>&gt;</span>OpenAI Java<span class="hljs-tag">&lt;/<span class="hljs-name">name</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">description</span>&gt;</span>Simple AI Application using OpenAPI Service<span class="hljs-tag">&lt;/<span class="hljs-name">description</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">properties</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">java.version</span>&gt;</span>17<span class="hljs-tag">&lt;/<span class="hljs-name">java.version</span>&gt;</span>
    <span class="hljs-tag">&lt;/<span class="hljs-name">properties</span>&gt;</span>

    <span class="hljs-tag">&lt;<span class="hljs-name">dependencyManagement</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">dependencies</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">dependency</span>&gt;</span>
                <span class="hljs-tag">&lt;<span class="hljs-name">groupId</span>&gt;</span>org.springframework.ai<span class="hljs-tag">&lt;/<span class="hljs-name">groupId</span>&gt;</span>
                <span class="hljs-tag">&lt;<span class="hljs-name">artifactId</span>&gt;</span>spring-ai-bom<span class="hljs-tag">&lt;/<span class="hljs-name">artifactId</span>&gt;</span>
                <span class="hljs-tag">&lt;<span class="hljs-name">version</span>&gt;</span>1.0.0-SNAPSHOT<span class="hljs-tag">&lt;/<span class="hljs-name">version</span>&gt;</span>
                <span class="hljs-tag">&lt;<span class="hljs-name">type</span>&gt;</span>pom<span class="hljs-tag">&lt;/<span class="hljs-name">type</span>&gt;</span>
                <span class="hljs-tag">&lt;<span class="hljs-name">scope</span>&gt;</span>import<span class="hljs-tag">&lt;/<span class="hljs-name">scope</span>&gt;</span>
            <span class="hljs-tag">&lt;/<span class="hljs-name">dependency</span>&gt;</span>
        <span class="hljs-tag">&lt;/<span class="hljs-name">dependencies</span>&gt;</span>
    <span class="hljs-tag">&lt;/<span class="hljs-name">dependencyManagement</span>&gt;</span>

    <span class="hljs-tag">&lt;<span class="hljs-name">dependencies</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">dependency</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">groupId</span>&gt;</span>org.springframework.boot<span class="hljs-tag">&lt;/<span class="hljs-name">groupId</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">artifactId</span>&gt;</span>spring-boot-starter-web<span class="hljs-tag">&lt;/<span class="hljs-name">artifactId</span>&gt;</span>
        <span class="hljs-tag">&lt;/<span class="hljs-name">dependency</span>&gt;</span>

        <span class="hljs-tag">&lt;<span class="hljs-name">dependency</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">groupId</span>&gt;</span>org.springframework.boot<span class="hljs-tag">&lt;/<span class="hljs-name">groupId</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">artifactId</span>&gt;</span>spring-boot-starter-actuator<span class="hljs-tag">&lt;/<span class="hljs-name">artifactId</span>&gt;</span>
        <span class="hljs-tag">&lt;/<span class="hljs-name">dependency</span>&gt;</span>

        <span class="hljs-tag">&lt;<span class="hljs-name">dependency</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">groupId</span>&gt;</span>org.springframework.ai<span class="hljs-tag">&lt;/<span class="hljs-name">groupId</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">artifactId</span>&gt;</span>spring-ai-openai-spring-boot-starter<span class="hljs-tag">&lt;/<span class="hljs-name">artifactId</span>&gt;</span>
        <span class="hljs-tag">&lt;/<span class="hljs-name">dependency</span>&gt;</span>

        <span class="hljs-tag">&lt;<span class="hljs-name">dependency</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">groupId</span>&gt;</span>org.springframework.boot<span class="hljs-tag">&lt;/<span class="hljs-name">groupId</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">artifactId</span>&gt;</span>spring-boot-starter-test<span class="hljs-tag">&lt;/<span class="hljs-name">artifactId</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">scope</span>&gt;</span>test<span class="hljs-tag">&lt;/<span class="hljs-name">scope</span>&gt;</span>
        <span class="hljs-tag">&lt;/<span class="hljs-name">dependency</span>&gt;</span>
    <span class="hljs-tag">&lt;/<span class="hljs-name">dependencies</span>&gt;</span>

    <span class="hljs-tag">&lt;<span class="hljs-name">build</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">plugins</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">plugin</span>&gt;</span>
                <span class="hljs-tag">&lt;<span class="hljs-name">groupId</span>&gt;</span>org.springframework.boot<span class="hljs-tag">&lt;/<span class="hljs-name">groupId</span>&gt;</span>
                <span class="hljs-tag">&lt;<span class="hljs-name">artifactId</span>&gt;</span>spring-boot-maven-plugin<span class="hljs-tag">&lt;/<span class="hljs-name">artifactId</span>&gt;</span>
            <span class="hljs-tag">&lt;/<span class="hljs-name">plugin</span>&gt;</span>
        <span class="hljs-tag">&lt;/<span class="hljs-name">plugins</span>&gt;</span>
    <span class="hljs-tag">&lt;/<span class="hljs-name">build</span>&gt;</span>

    <span class="hljs-tag">&lt;<span class="hljs-name">repositories</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">repository</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">id</span>&gt;</span>spring-milestones<span class="hljs-tag">&lt;/<span class="hljs-name">id</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">name</span>&gt;</span>Spring Milestones<span class="hljs-tag">&lt;/<span class="hljs-name">name</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">url</span>&gt;</span>https://repo.spring.io/milestone<span class="hljs-tag">&lt;/<span class="hljs-name">url</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">snapshots</span>&gt;</span>
                <span class="hljs-tag">&lt;<span class="hljs-name">enabled</span>&gt;</span>false<span class="hljs-tag">&lt;/<span class="hljs-name">enabled</span>&gt;</span>
            <span class="hljs-tag">&lt;/<span class="hljs-name">snapshots</span>&gt;</span>
        <span class="hljs-tag">&lt;/<span class="hljs-name">repository</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">repository</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">id</span>&gt;</span>spring-snapshots<span class="hljs-tag">&lt;/<span class="hljs-name">id</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">name</span>&gt;</span>Spring Snapshots<span class="hljs-tag">&lt;/<span class="hljs-name">name</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">url</span>&gt;</span>https://repo.spring.io/snapshot<span class="hljs-tag">&lt;/<span class="hljs-name">url</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">releases</span>&gt;</span>
                <span class="hljs-tag">&lt;<span class="hljs-name">enabled</span>&gt;</span>false<span class="hljs-tag">&lt;/<span class="hljs-name">enabled</span>&gt;</span>
            <span class="hljs-tag">&lt;/<span class="hljs-name">releases</span>&gt;</span>
        <span class="hljs-tag">&lt;/<span class="hljs-name">repository</span>&gt;</span>
    <span class="hljs-tag">&lt;/<span class="hljs-name">repositories</span>&gt;</span>
<span class="hljs-tag">&lt;/<span class="hljs-name">project</span>&gt;</span>
</code></pre>
<p>This configuration sets up our project with Spring Boot, Spring AI, and the necessary dependencies for working with OpenAI.</p>
<h2 id="heading-application-structure">Application Structure</h2>
<p>Now, let's create the main application class, <strong>Application.java</strong>:</p>
<pre><code class="lang-java"><span class="hljs-keyword">package</span> com.donvitocodes;

<span class="hljs-keyword">import</span> org.springframework.boot.SpringApplication;
<span class="hljs-keyword">import</span> org.springframework.boot.autoconfigure.SpringBootApplication;

<span class="hljs-meta">@SpringBootApplication</span>
<span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Application</span> </span>{

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">void</span> <span class="hljs-title">main</span><span class="hljs-params">(String[] args)</span> </span>{
        SpringApplication.run(Application.class, args);
    }
}
</code></pre>
<h2 id="heading-configuration-class">Configuration Class</h2>
<p>Next, we'll create a configuration class <strong>Config.java</strong> to set up our ChatClient.</p>
<pre><code class="lang-java"><span class="hljs-keyword">package</span> com.donvitocodes;

<span class="hljs-keyword">import</span> org.springframework.ai.chat.client.ChatClient;
<span class="hljs-keyword">import</span> org.springframework.ai.openai.OpenAiChatOptions;
<span class="hljs-keyword">import</span> org.springframework.context.annotation.Bean;
<span class="hljs-keyword">import</span> org.springframework.context.annotation.Configuration;

<span class="hljs-meta">@Configuration</span>
<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Config</span> </span>{

    <span class="hljs-meta">@Bean</span>
    <span class="hljs-function">ChatClient <span class="hljs-title">chatClient</span><span class="hljs-params">(ChatClient.Builder builder)</span> </span>{

        String modelName = <span class="hljs-string">"gpt-4o-mini"</span>;
        <span class="hljs-keyword">float</span> temperature = <span class="hljs-number">0f</span>;
        <span class="hljs-keyword">int</span> maxTokens = <span class="hljs-number">1024</span>;

        <span class="hljs-keyword">return</span> builder
                .defaultOptions(OpenAiChatOptions.builder()
                        .withModel(modelName)
                        .withTemperature(temperature)
                        .withMaxTokens(maxTokens)
                        .withStreamUsage(<span class="hljs-keyword">true</span>)
                        .build())
                .build();
    }

}
</code></pre>
<p>This configuration sets up the ChatClient with custom options for the OpenAI model, including the model name, temperature, and maximum tokens.</p>
<h2 id="heading-creating-the-controller">Creating the Controller</h2>
<p>Finally, let's create a controller:</p>
<pre><code class="lang-java"><span class="hljs-keyword">package</span> com.donvitocodes;

<span class="hljs-keyword">import</span> org.springframework.ai.chat.client.ChatClient;
<span class="hljs-keyword">import</span> org.springframework.web.bind.annotation.GetMapping;
<span class="hljs-keyword">import</span> org.springframework.web.bind.annotation.RequestParam;
<span class="hljs-keyword">import</span> org.springframework.web.bind.annotation.RestController;
<span class="hljs-keyword">import</span> reactor.core.publisher.Flux;

<span class="hljs-keyword">import</span> java.util.Map;

<span class="hljs-meta">@RestController</span>
<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">AIController</span> </span>{
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">final</span> ChatClient chatClient;

    AIController(ChatClient chatClient) {
        <span class="hljs-keyword">this</span>.chatClient = chatClient;
    }

    <span class="hljs-meta">@GetMapping("/chat")</span>
    <span class="hljs-function">Map&lt;String, String&gt; <span class="hljs-title">completion</span><span class="hljs-params">(<span class="hljs-meta">@RequestParam(value = "message", defaultValue = "")</span> String message)</span> </span>{
        <span class="hljs-keyword">return</span> Map.of(
                <span class="hljs-string">"completion"</span>,
                chatClient.prompt()
                        .user(message)
                        .call()
                        .content());
    }

    <span class="hljs-comment">// Streaming </span>
    <span class="hljs-meta">@GetMapping(value = "/chat-stream", produces = "application/stream+json")</span>
    <span class="hljs-function"><span class="hljs-keyword">public</span> Flux&lt;String&gt; <span class="hljs-title">streamCompletion</span><span class="hljs-params">(<span class="hljs-meta">@RequestParam(value = "message", defaultValue = "")</span> String message)</span> </span>{
        <span class="hljs-keyword">return</span>  chatClient.prompt()
                .user(message)
                .stream()
                .content();
    }


}
</code></pre>
<p>This controller exposes a simple GET endpoint that accepts a message parameter and returns the AI-generated response in full (no streaming).</p>
<h3 id="heading-streaming-the-chat-response">Streaming the Chat Response</h3>
<p>If you want to stream the response instead, this is the relevant part of the code:</p>
<pre><code class="lang-java"><span class="hljs-meta">@GetMapping(value = "/chat-stream", produces = "application/stream+json")</span>
<span class="hljs-function"><span class="hljs-keyword">public</span> Flux&lt;String&gt; <span class="hljs-title">streamCompletion</span><span class="hljs-params">(<span class="hljs-meta">@RequestParam(value = "message", defaultValue = "")</span> String message)</span> </span>{
    <span class="hljs-keyword">return</span> chatClient.prompt()
          .user(message)
          .stream()
          .content();
}
</code></pre>
<h2 id="heading-running-the-application">Running the Application</h2>
<p>To run the application, make sure you've set your OpenAI API key as an environment variable:</p>
<pre><code class="lang-bash">export SPRING_AI_OPENAI_API_KEY=your_api_key_here
</code></pre>
<p>Then, you can start the application using Maven:</p>
<pre><code class="lang-bash">mvn spring-boot:run
</code></pre>
<h2 id="heading-testing-the-api">Testing the API</h2>
<p><strong>URL:</strong><a target="_blank" href="http://localhost:8080/chat"><code>http://localhost:8080/chat</code></a></p>
<p><strong>Method:</strong><code>GET</code></p>
<p><strong>Query Parameter:</strong></p>
<ul>
<li><code>message</code> (string): The prompt or instruction to be processed by the API. For example, <code>"create a short blog post about investing in real estate"</code>.</li>
</ul>
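<p>Note that the message must be percent-encoded in the URL. As a quick sketch (not part of the tutorial's code), here's how you could build the encoded URL in Python using only the standard library:</p>

```python
from urllib.parse import quote

def chat_url(base: str, message: str) -> str:
    """Build the GET URL for the /chat endpoint with a percent-encoded message."""
    return f"{base}/chat?message={quote(message)}"

url = chat_url("http://localhost:8080",
               "create a short blog post about investing in real estate")
print(url)
# -> http://localhost:8080/chat?message=create%20a%20short%20blog%20post%20about%20investing%20in%20real%20estate
```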
<p>Once your application is running, you can test the API using curl or any HTTP client:</p>
<pre><code class="lang-bash">curl --location <span class="hljs-string">'http://localhost:8080/chat?message=create%20a%20short%20blog%20post%20about%20investing%20in%20real%20estate'</span>
</code></pre>
<p>Or use Postman</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1721376753382/84b60211-1298-4c09-b826-dc6b078bdd2a.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-testing-with-streaming">Testing with Streaming</h3>
<p>You need to use <strong>--no-buffer</strong> or <strong>-N</strong> with curl so the output is printed as it arrives. Make sure you call the <strong>http://localhost:8080/chat-stream</strong> endpoint.</p>
<pre><code class="lang-bash">curl --no-buffer --location <span class="hljs-string">'http://localhost:8080/chat-stream?message=create%20a%20short%20blog%20post%20about%20investing%20in%20real%20estate%20in%20the%20philippines'</span>
</code></pre>
<p>The API will return the AI-generated response based on the input message.</p>
<p>In this blog post, we've explored how to create a chat API using Spring AI, OpenAI, and Java. This integration provides a powerful foundation for building AI-powered conversational applications within the Spring ecosystem. By leveraging Spring AI's abstraction layer, you can easily switch between different AI providers or models in the future, making your application more flexible and maintainable.</p>
<p>Remember to handle API rate limits, implement proper error handling, and consider adding authentication to your API before deploying it to production.</p>
<p>Happy coding!</p>
<hr />
<p>If you're interested in learning more about developing apps with Generative AI, subscribe to my blog for more tutorials and sample code. You can also follow me on <a target="_blank" href="https://x.com/donvito">Twitter</a> or connect with me on <a target="_blank" href="https://www.linkedin.com/in/melvinvivas/">LinkedIn</a>.</p>
]]></content:encoded></item><item><title><![CDATA[No JavaScript, No Problem! Building Web Apps with HTMX and Go]]></title><description><![CDATA[In this tutorial, we'll dive into using HTMX with Go to create a simple dynamic web application without the need for JavaScript. Our example will focus on a translation service application, demonstrating how to leverage HTMX to make API calls to a Go...]]></description><link>https://blog.donvitocodes.com/no-javascript-no-problem-building-web-apps-with-htmx-and-go</link><guid isPermaLink="true">https://blog.donvitocodes.com/no-javascript-no-problem-building-web-apps-with-htmx-and-go</guid><category><![CDATA[go-htmx]]></category><category><![CDATA[htmx]]></category><category><![CDATA[Go Language]]></category><category><![CDATA[groq]]></category><category><![CDATA[JavaScript]]></category><dc:creator><![CDATA[DonvitoCodes]]></dc:creator><pubDate>Fri, 05 Jul 2024 08:17:26 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1720165776464/f1e04cad-b544-4cd5-b7fe-659dab605f6d.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this tutorial, we'll dive into using HTMX with Go to create a simple dynamic web application without the need for JavaScript. Our example will focus on a translation service application, demonstrating how to leverage HTMX to make API calls to a Go backend application.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/watch?v=d3DrSoz-5no">https://www.youtube.com/watch?v=d3DrSoz-5no</a></div>
<p> </p>
<h2 id="heading-what-is-htmx">What is HTMX?</h2>
<p><a target="_blank" href="https://htmx.org/">HTMX</a> is a modern JavaScript library that extends HTML through attributes, allowing you to define behaviour directly in your markup. This can simplify the process of adding interactivity to your web pages, making it an excellent tool for developers looking to enhance their applications with minimal fuss.</p>
<p>Here's a diagram generated by Claude AI to show how HTMX works.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1720166817090/25f0ea5e-9f35-44c7-ad1b-af61a73bbb8b.png" alt class="image--center mx-auto" /></p>
<p>While HTMX provides significant benefits in terms of performance, rapid prototyping, and enhanced user experience, it may not be suitable for all projects. Complex, large-scale applications might still benefit from more comprehensive frameworks. However, for many web applications that prioritize simplicity, performance, and rapid development, HTMX offers a great alternative to traditional JavaScript-heavy implementations.</p>
<h2 id="heading-how-to-use-htmx">How to use HTMX</h2>
<p>In this example, we have three main files: index.html, translation.html, and main.go.</p>
<h3 id="heading-indexhtml"><strong>index.html</strong></h3>
<p>This file contains the clients side code which has a form which submits to a /translate API endpoint.</p>
<pre><code class="lang-bash">&lt;html lang=<span class="hljs-string">"en"</span>&gt;
    &lt;head&gt;
        &lt;title&gt;HTMX + Go Example&lt;/title&gt;
    &lt;/head&gt;
    &lt;body&gt;
        &lt;script src=<span class="hljs-string">"https://unpkg.com/htmx.org@2.0.0"</span>&gt;&lt;/script&gt;
        &lt;div class=<span class="hljs-string">"container"</span>&gt;
            &lt;h2&gt;HTMX + Go&lt;/h2&gt;
            &lt;p&gt;This simple example calls an LLM using Groq Completion API <span class="hljs-keyword">for</span> the translation.&lt;/p&gt;
        &lt;form hx-post=<span class="hljs-string">"/translate"</span> hx-target=<span class="hljs-string">"#translated-text"</span> hx-swap=<span class="hljs-string">"replace"</span>&gt;
            &lt;label <span class="hljs-keyword">for</span>=<span class="hljs-string">"text-to-translate"</span>&gt;Enter text to translate&lt;/label&gt;&lt;br/&gt;
            &lt;textarea id=<span class="hljs-string">"text-to-translate"</span> name=<span class="hljs-string">"textToTranslate"</span> rows=<span class="hljs-string">"10"</span> cols=<span class="hljs-string">"50"</span>&gt;&lt;/textarea&gt;
            &lt;br/&gt;&lt;br/&gt;
            &lt;label <span class="hljs-keyword">for</span>=<span class="hljs-string">"translate-form"</span>&gt;&lt;/label&gt;
            &lt;select id=<span class="hljs-string">"translate-form"</span> name=<span class="hljs-string">"languageToTranslateTo"</span>&gt;
                &lt;option value=<span class="hljs-string">"english"</span>&gt;English&lt;/option&gt;
                &lt;option value=<span class="hljs-string">"tagalog"</span>&gt;Tagalog&lt;/option&gt;
                &lt;option value=<span class="hljs-string">"french"</span>&gt;French&lt;/option&gt;
                &lt;option value=<span class="hljs-string">"korean"</span>&gt;Korean&lt;/option&gt;
                &lt;option value=<span class="hljs-string">"japanese"</span>&gt;Japanese&lt;/option&gt;
            &lt;/select&gt;&lt;br/&gt;&lt;br/&gt;
            &lt;button <span class="hljs-built_in">type</span>=<span class="hljs-string">"submit"</span>&gt;Submit&lt;/button&gt;
        &lt;/form&gt;
        &lt;div id=<span class="hljs-string">"translated-text"</span>&gt;&lt;/div&gt;
        &lt;/div&gt;
    &lt;/body&gt;
&lt;/html&gt;
</code></pre>
<p><strong>HTMX Integration</strong></p>
<pre><code class="lang-bash">&lt;script src=<span class="hljs-string">"https://unpkg.com/htmx.org@2.0.0"</span>&gt;&lt;/script&gt;
</code></pre>
<p>This line imports the HTMX library from a CDN. HTMX is a library that allows you to access AJAX, CSS Transitions, WebSockets, and Server Sent Events directly in HTML, without writing JavaScript.</p>
<p><strong>The Form</strong></p>
<pre><code class="lang-bash">&lt;form hx-post=<span class="hljs-string">"/translate"</span> hx-target=<span class="hljs-string">"#translated-text"</span> hx-swap=<span class="hljs-string">"replace"</span>&gt;
</code></pre>
<p>This form uses HTMX attributes:</p>
<ul>
<li><p><code>hx-post="/translate"</code>: When submitted, it will make a POST request to the "/translate" endpoint.</p>
</li>
<li><p><code>hx-target="#translated-text"</code>: The response will be inserted into the element with id "translated-text".</p>
</li>
<li><p><code>hx-swap="replace"</code>: The response will replace the content of the target element.</p>
</li>
</ul>
<h3 id="heading-maingo"><strong>main.go</strong></h3>
<pre><code class="lang-go">func main() {

    // Set up the HTTP handlers
    http.HandleFunc("/", handleHome)
    http.HandleFunc("/translate", handleTranslate)

    // Start the server
    err := http.ListenAndServe(":8080", nil)
    if err != nil {
        return
    }

}
</code></pre>
<p>The <strong>handleHome()</strong> function serves index.html. This is the entry point of our web server, which renders the HTML form with HTMX enabled. The same handler could also pass initial data into the template; I'll show that in a separate example to keep this one simple. For now, we just want to show how to submit a form and update the UI dynamically using the API response.</p>
<pre><code class="lang-go">
<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">handleHome</span><span class="hljs-params">(w http.ResponseWriter, r *http.Request)</span></span> {
    tmpl := template.Must(template.ParseFiles(<span class="hljs-string">"index.html"</span>))

    err := tmpl.Execute(w, <span class="hljs-literal">nil</span>)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span>
    }
}
</code></pre>
<p>The <strong>handleTranslate()</strong> function handles the form submission. It renders <strong>translation.html</strong> into the response.</p>
<pre><code class="lang-go"><span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">handleTranslate</span><span class="hljs-params">(w http.ResponseWriter, r *http.Request)</span></span> {
    textToTranslate := r.FormValue(<span class="hljs-string">"textToTranslate"</span>)
    languageToTranslateTo := r.FormValue(<span class="hljs-string">"languageToTranslateTo"</span>)
    translatedText := translateText(textToTranslate, languageToTranslateTo)

    <span class="hljs-keyword">type</span> Translation <span class="hljs-keyword">struct</span> {
        TextToTranslate <span class="hljs-keyword">string</span>
        TranslatedText  <span class="hljs-keyword">string</span>
    }

    t := Translation{
        TextToTranslate: textToTranslate,
        TranslatedText:  translatedText,
    }

    tmpl := template.Must(template.ParseFiles(<span class="hljs-string">"translation.html"</span>))
    err := tmpl.Execute(w, t)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span>
    }

}
</code></pre>
<p>Before sending the response back, it adds the translated text into the template.</p>
<pre><code class="lang-go">t := Translation{
    TextToTranslate: textToTranslate,
    TranslatedText:  translatedText,
}
tmpl := template.Must(template.ParseFiles(<span class="hljs-string">"translation.html"</span>))
err := tmpl.Execute(w, t)
</code></pre>
<p><strong>translation.html</strong> looks like this:</p>
<pre><code class="lang-go">&lt;div id=<span class="hljs-string">"translated-text"</span> class=<span class="hljs-string">"translated-text"</span>&gt;
    {{.TranslatedText}}
&lt;/div&gt;
</code></pre>
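<p>To see what <strong>tmpl.Execute()</strong> is doing conceptually (substituting the struct's fields into the template's placeholders), here's a rough Python analogy; illustrative only, since the real code uses Go's html/template:</p>

```python
from string import Template

# Rough analogy of Go's tmpl.Execute(): fill placeholders from a dict of values.
# ($TranslatedText here stands in for Go's {{.TranslatedText}} syntax.)
fragment = Template(
    '<div id="translated-text" class="translated-text">\n'
    '    $TranslatedText\n'
    '</div>'
)
translation = {"TextToTranslate": "Hello", "TranslatedText": "Bonjour"}
html = fragment.safe_substitute(translation)
print(html)
```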
<h2 id="heading-so-what-is-happening"><strong>So what is happening?</strong></h2>
<ul>
<li><p>The form submission calls the endpoint /translate</p>
</li>
<li><p>The Go API processes the request and retrieves the form values <strong>textToTranslate</strong> and <strong>languageToTranslateTo</strong></p>
</li>
<li><p>After retrieving it, the flow calls the translation function using the Groq client to translate the text from the user.</p>
</li>
<li><p>It gets the response and populates the <strong>Translation struct</strong> named <strong>t</strong>.</p>
</li>
<li><p>Then, it populates the <strong>translation.html</strong> template using the struct. Basically, <strong>tmpl.Execute()</strong> gets the values from the struct and updates the template. <strong>tmpl.Execute()</strong> also writes the rendered output to <strong>w http.ResponseWriter</strong></p>
</li>
<li><p>The response will then be received by the client and processed by HTMX. HTMX will update the UI.</p>
<ul>
<li><p><code>hx-target="#translated-text"</code>: The response will be inserted into the element with id "translated-text".</p>
</li>
<li><p><code>hx-swap="replace"</code>: The response will replace the content of the target element.</p>
<p>  Here's the reference list of supported HTMX attributes you can use:</p>
</li>
</ul>
</li>
</ul>
    <div data-node-type="callout">
    <div data-node-type="callout-emoji">💡</div>
    <div data-node-type="callout-text"><a target="_blank" href="https://htmx.org/reference/">https://htmx.org/reference/</a></div>
    </div>


<p>All of these actions, which would traditionally require custom JavaScript, are accomplished using just HTML attributes. This approach can significantly simplify your front-end code, reduce the amount of JavaScript you need to write and maintain, and make your application more accessible and easier to understand.</p>
<p><strong>You can go through the entire example in my GitHub repo. Don't forget to leave a</strong> ⭐</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/donvito/learngo/tree/master/htmx">https://github.com/donvito/learngo/tree/master/htmx</a></div>
<p> </p>
<hr />
<p>If you're interested in learning more about developing with Go, subscribe to my blog for more tutorials and sample code. You can also follow me on <a target="_blank" href="https://x.com/donvito">Twitter</a>.</p>
]]></content:encoded></item><item><title><![CDATA[Chat with Data from a Website using LangChain and OpenAI]]></title><description><![CDATA[In this tutorial, we will walk through the steps to create a simple Q&A system that retrieves and answers questions based on information from a specific Wikipedia page. This system leverages the power of LangChain, OpenAI's GPT-3.5-turbo, and LangCha...]]></description><link>https://blog.donvitocodes.com/chat-with-data-from-a-website-using-langchain-and-openai</link><guid isPermaLink="true">https://blog.donvitocodes.com/chat-with-data-from-a-website-using-langchain-and-openai</guid><category><![CDATA[generative ai]]></category><category><![CDATA[openai]]></category><category><![CDATA[Python]]></category><category><![CDATA[langchain]]></category><dc:creator><![CDATA[DonvitoCodes]]></dc:creator><pubDate>Wed, 03 Jul 2024 12:53:42 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1720011023610/5d32cf15-8536-41ea-896c-94a51e81e09d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this tutorial, we will walk through the steps to create a simple Q&amp;A system that retrieves and answers questions based on information from a specific Wikipedia page. This system leverages the power of LangChain, OpenAI's GPT-3.5-turbo, and LangChain Hub. By the end of this guide, you'll have a working example of how to integrate these tools to create a practical application.</p>
<h4 id="heading-prerequisites">Prerequisites</h4>
<p>Before we start, ensure you have the following installed:</p>
<ul>
<li><p>Python 3.11 or higher</p>
</li>
<li><p>pip (Python package installer)</p>
</li>
</ul>
<p>You will also need to install the necessary dependencies. Open your terminal and run the following command:</p>
<pre><code class="lang-bash">pip install langchain openai langchain-community chromadb tiktoken langchainhub
</code></pre>
<p>Additionally, you need to set the <code>OPENAI_API_KEY</code> environment variable to authenticate with the OpenAI API. You can do this by running the following command in your terminal:</p>
<pre><code class="lang-bash">export OPENAI_API_KEY=<span class="hljs-string">'your-openai-api-key'</span>
</code></pre>
<p>Replace <code>'your-openai-api-key'</code> with your actual OpenAI API key.</p>
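<p>If you want your script to fail fast with a clear message when the key is missing, a small helper like this works; the helper name is mine, not part of LangChain:</p>

```python
import os

def require_env(name: str = "OPENAI_API_KEY") -> str:
    """Return the value of an environment variable, or raise a clear error."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set; run: export {name}='your-key'")
    return value
```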
<h3 id="heading-step-1-setting-up-the-project">Step 1: Setting Up the Project</h3>
<p>First, let's import the required modules and initialize the document loader.</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> langchain <span class="hljs-keyword">import</span> hub
<span class="hljs-keyword">from</span> langchain.chains <span class="hljs-keyword">import</span> RetrievalQA
<span class="hljs-keyword">from</span> langchain_community.chat_models <span class="hljs-keyword">import</span> ChatOpenAI
<span class="hljs-keyword">from</span> langchain_community.document_loaders <span class="hljs-keyword">import</span> WebBaseLoader
<span class="hljs-keyword">from</span> langchain_community.embeddings <span class="hljs-keyword">import</span> OpenAIEmbeddings
<span class="hljs-keyword">from</span> langchain_community.vectorstores <span class="hljs-keyword">import</span> Chroma
<span class="hljs-keyword">from</span> langchain_text_splitters <span class="hljs-keyword">import</span> RecursiveCharacterTextSplitter
</code></pre>
<h3 id="heading-step-2-loading-and-splitting-the-document">Step 2: Loading and Splitting the Document</h3>
<p>We will load a Wikipedia page using <code>WebBaseLoader</code>. For this tutorial, we'll use the Wikipedia page of Korean actress Kim Ji-won. After loading the data, we will split the document into smaller chunks to make it easier for our model to process.</p>
<pre><code class="lang-python"><span class="hljs-comment"># You can add multiple URLs here</span>
loader = WebBaseLoader([<span class="hljs-string">"https://en.wikipedia.org/wiki/Kim_Ji-won_(actress)"</span>])
data = loader.load()

text_splitter = RecursiveCharacterTextSplitter(chunk_size=<span class="hljs-number">500</span>, chunk_overlap=<span class="hljs-number">0</span>)
all_splits = text_splitter.split_documents(data)
</code></pre>
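<p>To build intuition for what the splitter does: <code>chunk_size</code> caps each chunk's length and <code>chunk_overlap</code> repeats some trailing context at the start of the next chunk. Here's a simplified, stdlib-only sketch of fixed-size chunking (the real <code>RecursiveCharacterTextSplitter</code> additionally prefers to break on paragraph and sentence boundaries):</p>

```python
def chunk_text(text: str, chunk_size: int = 500, chunk_overlap: int = 0) -> list[str]:
    """Naive fixed-size chunking with overlap; a simplified illustration only."""
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

print(chunk_text("abcdefghij", chunk_size=4, chunk_overlap=1))
# -> ['abcd', 'defg', 'ghij', 'j']
```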
<h3 id="heading-step-3-creating-the-vector-store">Step 3: Creating the Vector Store</h3>
<p>Next, we need to convert the text chunks into embeddings using OpenAI's embeddings model and store these embeddings in a vector store. We'll use <code>Chroma</code> as our vector store.</p>
<pre><code class="lang-python">vectorstore = Chroma.from_documents(documents=all_splits, embedding=OpenAIEmbeddings())
</code></pre>
<h3 id="heading-step-4-setting-up-the-prompt">Step 4: Setting Up the Prompt</h3>
<p>We'll use a pre-defined prompt from LangChain Hub. Prompts are essential as they guide the model on how to structure its responses.</p>
<pre><code class="lang-python">prompt = hub.pull(<span class="hljs-string">"donvito-codes/rag-prompt"</span>)
</code></pre>
<p><strong>Here is the Prompt configured in LangChain Hub</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1720011607118/906e4076-3feb-4c12-a590-e093d3a23161.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-5-initializing-the-language-model">Step 5: Initializing the Language Model</h3>
<p>We will use OpenAI's GPT-3.5-turbo model for our Q&amp;A system. The <code>temperature</code> parameter controls the randomness of the model's responses. A lower temperature means more deterministic outputs.</p>
<pre><code class="lang-python">llm = ChatOpenAI(model_name=<span class="hljs-string">"gpt-3.5-turbo"</span>, temperature=<span class="hljs-number">0</span>)
</code></pre>
<h3 id="heading-step-6-creating-the-retrievalqa-chain">Step 6: Creating the RetrievalQA Chain</h3>
<p>We will create a <code>RetrievalQA</code> chain using the language model and the vector store retriever. The <code>chain_type_kwargs</code> parameter allows us to pass additional settings to the chain, such as the prompt.</p>
<pre><code class="lang-python">qa_chain = RetrievalQA.from_chain_type(
    llm,
    retriever=vectorstore.as_retriever(),
    chain_type_kwargs={<span class="hljs-string">"prompt"</span>: prompt}
)
</code></pre>
<h3 id="heading-step-7-asking-a-question">Step 7: Asking a Question</h3>
<p>Now, we can ask a question and get an answer from our Q&amp;A system. For this example, we'll ask about Kim Ji-won's latest TV series.</p>
<pre><code class="lang-python">question = <span class="hljs-string">"Which latest TV series did Kim Ji-won star in 2024?"</span>
result = qa_chain({<span class="hljs-string">"query"</span>: question})
print(result[<span class="hljs-string">"result"</span>])
</code></pre>
<h3 id="heading-heres-the-full-working-source-code">Here's the full working source code</h3>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> langchain <span class="hljs-keyword">import</span> hub
<span class="hljs-keyword">from</span> langchain.chains <span class="hljs-keyword">import</span> RetrievalQA
<span class="hljs-keyword">from</span> langchain_community.chat_models <span class="hljs-keyword">import</span> ChatOpenAI
<span class="hljs-keyword">from</span> langchain_community.document_loaders <span class="hljs-keyword">import</span> WebBaseLoader
<span class="hljs-keyword">from</span> langchain_community.embeddings <span class="hljs-keyword">import</span> OpenAIEmbeddings
<span class="hljs-keyword">from</span> langchain_community.vectorstores <span class="hljs-keyword">import</span> Chroma
<span class="hljs-keyword">from</span> langchain_text_splitters <span class="hljs-keyword">import</span> RecursiveCharacterTextSplitter

loader = WebBaseLoader([<span class="hljs-string">"https://en.wikipedia.org/wiki/Kim_Ji-won_(actress)"</span>])
data = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=<span class="hljs-number">500</span>, chunk_overlap=<span class="hljs-number">0</span>)
all_splits = text_splitter.split_documents(data)

vectorstore = Chroma.from_documents(documents=all_splits, embedding=OpenAIEmbeddings())

prompt = hub.pull(<span class="hljs-string">"donvito-codes/rag-prompt"</span>)

llm = ChatOpenAI(model_name=<span class="hljs-string">"gpt-3.5-turbo"</span>, temperature=<span class="hljs-number">0</span>)

<span class="hljs-comment"># RetrievalQA</span>
qa_chain = RetrievalQA.from_chain_type(
    llm,
    retriever=vectorstore.as_retriever(),
    chain_type_kwargs={<span class="hljs-string">"prompt"</span>: prompt}
)

question = <span class="hljs-string">"Which latest TV series did Kim Ji-won star in 2024?"</span>
result = qa_chain({<span class="hljs-string">"query"</span>: question})
print(result[<span class="hljs-string">"result"</span>])
</code></pre>
<h3 id="heading-save-the-file-as-mainpy-and-run-it"><strong>Save the file as main.py and run it</strong></h3>
<pre><code class="lang-bash">python main.py
</code></pre>
<p>You've now created a basic Q&amp;A system that pulls data from a Wikipedia page, processes it into embeddings, and uses a language model to retrieve and answer questions. This is a powerful example of how you can leverage LangChain and OpenAI to create intelligent applications.</p>
<p>Feel free to experiment with different Wikipedia pages, questions, and prompts to see how the system performs. The flexibility of LangChain and OpenAI allows for a wide range of applications beyond just Q&amp;A systems. Happy coding!</p>
<h4 id="heading-additional-resources">Additional Resources</h4>
<ul>
<li><p><a target="_blank" href="https://docs.langchain.com/">LangChain Documentation</a></p>
</li>
<li><p><a target="_blank" href="https://beta.openai.com/docs/">OpenAI API Reference</a></p>
</li>
<li><p><a target="_blank" href="https://www.langchain.com/langsmith">LangChain Hub</a></p>
</li>
</ul>
<hr />
<p>If you're interested in learning more about developing with Generative AI, subscribe to my blog for more tutorials and sample code.</p>
]]></content:encoded></item><item><title><![CDATA[LaunchStack: My NextJS Learning Journey]]></title><description><![CDATA[Hey there, fellow software engineers and aspiring startup founders! I'm excited to share a project I'm just starting: LaunchStack. But here's the twist – I'm not an expert. In fact, I'm using this project to learn NextJS myself!
What is LaunchStack?
...]]></description><link>https://blog.donvitocodes.com/launchstack-my-nextjs-learning-journey</link><guid isPermaLink="true">https://blog.donvitocodes.com/launchstack-my-nextjs-learning-journey</guid><category><![CDATA[Next.js]]></category><category><![CDATA[Web Development]]></category><category><![CDATA[Startups]]></category><dc:creator><![CDATA[DonvitoCodes]]></dc:creator><pubDate>Wed, 03 Jul 2024 05:12:14 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1719981609285/e5a1ee31-c7c8-4810-99ac-1b70a0f987e5.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hey there, fellow software engineers and aspiring startup founders! I'm excited to share a project I'm just starting: LaunchStack. But here's the twist – I'm not an expert. In fact, I'm using this project to learn NextJS myself!</p>
<h2 id="heading-what-is-launchstack">What is LaunchStack?</h2>
<p>LaunchStack is going to be a NextJS starter kit that I hope will eventually have everything needed to build a startup or website with dynamic features quickly. But right now, it's more of a dream and a learning goal than a finished product.</p>
<h2 id="heading-why-am-i-creating-launchstack">Why am I creating LaunchStack?</h2>
<ol>
<li><p><strong>Learning NextJS</strong>: I'll be honest – I'm not proficient in NextJS yet. LaunchStack is my way of diving deep into this powerful framework. I'm learning as I go, and I'm excited to share this journey with you!</p>
</li>
<li><p><strong>Building a Useful Tool</strong>: As I learn, I want to create something that could eventually help me (and maybe you!) build web projects more efficiently.</p>
</li>
<li><p><strong>Challenging Myself</strong>: This is a pretty ambitious project for a NextJS beginner like me. But I believe in learning by doing, even if it means tackling something that seems a bit out of reach right now.</p>
</li>
<li><p><strong>Sharing the Journey</strong>: By documenting this process, I hope to inspire other developers who are also at the beginning of their journey with new technologies.</p>
</li>
</ol>
<p>Since I left my job, I have more time to dedicate to this project. My goal is to become proficient with NextJS by the end of it. I estimate it may take me around 6 months to a year to feel comfortable and confident working with NextJS, but I'm excited about the journey and the learning process.</p>
<h2 id="heading-what-do-i-hope-launchstack-will-include">What do I hope LaunchStack will include?</h2>
<p>As I learn and grow, I'm aiming to incorporate these features and identify the right technology for each. For now, I've just listed what I know based on my research.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1719981769054/beee2ef6-ce44-4fb7-b8c4-c04ea59f28c8.png" alt class="image--center mx-auto" /></p>
<table>
<thead>
<tr><th><strong>Feature</strong></th><th><strong>Technologies</strong><br /><strong>(subject to change based on research)</strong></th></tr>
</thead>
<tbody>
<tr><td><strong>Auth</strong></td><td>AuthJS</td></tr>
<tr><td><strong>API Security / Rate Limiting</strong></td><td>Unkey, Zuplo</td></tr>
<tr><td><strong>Theming</strong></td><td>Tailwind, DaisyUI, Shadcn</td></tr>
<tr><td><strong>Payments / Subscriptions</strong></td><td>Stripe, LemonSqueezy, RevenueCat</td></tr>
<tr><td><strong>Forms</strong></td><td>Netlify, HubSpot Forms, Formspree</td></tr>
<tr><td><strong>Email Sending</strong></td><td>Mailgun</td></tr>
<tr><td><strong>File Storage</strong></td><td>Supabase, S3</td></tr>
<tr><td><strong>ORM</strong></td><td>Drizzle</td></tr>
<tr><td><strong>Database</strong></td><td>PocketBase, SQLite<br />Postgres (Vercel, Supabase)<br />Redis (Vercel KV)</td></tr>
<tr><td><strong>Analytics</strong></td><td>Google Analytics, Posthog, Plausible</td></tr>
<tr><td><strong>Deployment</strong></td><td>Vercel, Netlify</td></tr>
<tr><td><strong>AI</strong></td><td>Vercel AI SDK</td></tr>
</tbody>
</table>
<h2 id="heading-a-call-for-support-and-patience">A Call for Support and Patience</h2>
<p>This project is ambitious, especially given my current skill level with NextJS. There will be struggles, mistakes, and probably moments of frustration. But I'm committed to learning and growing through this process.</p>
<p>If you're an experienced NextJS developer, I'd love your advice and insights. If you're a beginner like me, maybe we can learn together! And if you're just curious about the process, I invite you to follow along on this journey.</p>
<p>For now, LaunchStack will not be open-sourced until it is decent enough to show the world. This is something I have wanted to do for a long time, inspired by projects like ShipFast, LaunchFast and DivJoy. As a beginner, it's hard to start building all these features from scratch, but I'm determined to make it happen.</p>
<p>I can't promise that LaunchStack will be ready for use anytime soon. What I can promise is an honest, open look at what it's like to tackle a big project as a way of learning a new technology.</p>
<p>Stay tuned for updates, lessons learned, and, hopefully, steady progress towards a useful tool for all of us aspiring startup founders! You can subscribe here or follow me on <a target="_blank" href="http://x.com/donvito">Twitter</a>. I also go live on <a target="_blank" href="https://youtube.com/donvitocodes">YouTube</a> or <a target="_blank" href="https://twitch.tv/donvitocodes">Twitch</a> when I feel like doing live coding. See you there!</p>
<p>Let's learn, code, and grow together!</p>
]]></content:encoded></item><item><title><![CDATA[Integrating Generative AI with Real-Time Data from APIs - Groq, Python and Go]]></title><description><![CDATA[In the world of conversational AI, the ability to integrate with external systems and retrieve information in real-time is a game-changer. Imagine being able to tap into the power of custom API calls, REST APIs, and other external systems to provide ...]]></description><link>https://blog.donvitocodes.com/integrating-generative-ai-with-real-time-data-from-apis-groq-python-and-go</link><guid isPermaLink="true">https://blog.donvitocodes.com/integrating-generative-ai-with-real-time-data-from-apis-groq-python-and-go</guid><category><![CDATA[Go Language]]></category><category><![CDATA[Python]]></category><category><![CDATA[Python 3]]></category><category><![CDATA[generative ai]]></category><category><![CDATA[APIs]]></category><category><![CDATA[groq]]></category><dc:creator><![CDATA[DonvitoCodes]]></dc:creator><pubDate>Tue, 25 Jun 2024 08:08:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1719301975439/36cad366-842a-4ec1-8baa-babc8ba9390d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the world of conversational AI, the ability to integrate with external systems and retrieve information in real-time is a game-changer. Imagine being able to tap into the power of custom API calls, REST APIs, and other external systems to provide accurate and up-to-date responses to user questions.</p>
<p>This is precisely what Groq's function calling feature offers, allowing developers to harness the power of NLP processing to understand which function to call and retrieve the necessary information.</p>
<p>In this blog post, we'll explore how to use Groq's function calling feature to build conversational AI models that can interact with external systems, and demonstrate its capabilities using a simple example. We'll use an HTTP API written in Go to simulate an external API call which, ideally, will retrieve real data from a database. We'll break down the code into smaller parts, explaining each section in detail.</p>
<h2 id="heading-architecture-of-what-well-build"><strong>Architecture of what we'll build</strong></h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1719301277442/ab3b66ac-dead-4180-a75f-80410237a1dd.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-mock-api-in-go"><strong>Mock API in Go</strong></h2>
<p>Here is the code; just copy-paste it into a <strong>main.go</strong> file.</p>
<pre><code class="lang-go"><span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"encoding/json"</span>
    <span class="hljs-string">"net/http"</span>
    <span class="hljs-string">"strings"</span>
)

<span class="hljs-keyword">type</span> CondoInfo <span class="hljs-keyword">struct</span> {
    Price    <span class="hljs-keyword">string</span> <span class="hljs-string">`json:"price"`</span>
    Location <span class="hljs-keyword">string</span> <span class="hljs-string">`json:"location"`</span>
}

<span class="hljs-keyword">var</span> condos = <span class="hljs-keyword">map</span>[<span class="hljs-keyword">string</span>]CondoInfo{
    <span class="hljs-string">"arton"</span>: {
        Price:    <span class="hljs-string">"Php 7,000,000"</span>,
        Location: <span class="hljs-string">"Katipunan, Quezon City"</span>,
    },
    <span class="hljs-string">"gold residences"</span>: {
        Price:    <span class="hljs-string">"Php 8,000,000"</span>,
        Location: <span class="hljs-string">"Near NAIA Terminal 1, Paranaque City"</span>,
    },
    <span class="hljs-string">"aruga mactan"</span>: {
        Price:    <span class="hljs-string">"Php 9,000,000"</span>,
        Location: <span class="hljs-string">"Mactan, Cebu"</span>,
    },
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">getCondoLocation</span><span class="hljs-params">(w http.ResponseWriter, r *http.Request)</span></span> {
    condoName := strings.ToLower(r.URL.Query().Get(<span class="hljs-string">"condo_name"</span>))
    condo, exists := condos[condoName]
    <span class="hljs-keyword">if</span> !exists {
        http.Error(w, <span class="hljs-string">"Condo not found"</span>, http.StatusNotFound)
        <span class="hljs-keyword">return</span>
    }

    response := <span class="hljs-keyword">map</span>[<span class="hljs-keyword">string</span>]<span class="hljs-keyword">string</span>{<span class="hljs-string">"location"</span>: condo.Location}
    err := json.NewEncoder(w).Encode(response)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span>
    }
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">getCondoPrice</span><span class="hljs-params">(w http.ResponseWriter, r *http.Request)</span></span> {
    condoName := strings.ToLower(r.URL.Query().Get(<span class="hljs-string">"condo_name"</span>))
    condo, exists := condos[condoName]
    <span class="hljs-keyword">if</span> !exists {
        http.Error(w, <span class="hljs-string">"Condo not found"</span>, http.StatusNotFound)
        <span class="hljs-keyword">return</span>
    }

    response := <span class="hljs-keyword">map</span>[<span class="hljs-keyword">string</span>]<span class="hljs-keyword">string</span>{<span class="hljs-string">"price"</span>: condo.Price}
    err := json.NewEncoder(w).Encode(response)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span>
    }
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {
    http.HandleFunc(<span class="hljs-string">"/get_condo_location"</span>, getCondoLocation)
    http.HandleFunc(<span class="hljs-string">"/get_condo_price"</span>, getCondoPrice)

    err := http.ListenAndServe(<span class="hljs-string">":8080"</span>, <span class="hljs-literal">nil</span>)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span>
    }
}
</code></pre>
<p>Run the API using the <code>go</code> command. This starts the API at http://localhost:8080.</p>
<pre><code class="lang-bash">go run main.go
</code></pre>
<h3 id="heading-the-api-implements-the-following">The API implements the following</h3>
<p><code>get_condo_location</code> endpoint:</p>
<pre><code class="lang-bash">curl <span class="hljs-string">"http://localhost:8080/get_condo_location?condo_name=Arton"</span>

{<span class="hljs-string">"location"</span>:<span class="hljs-string">"Katipunan, Quezon City"</span>}
</code></pre>
<p><code>get_condo_price</code> endpoint:</p>
<pre><code class="lang-bash">curl <span class="hljs-string">"http://localhost:8080/get_condo_price?condo_name=Arton"</span>

{<span class="hljs-string">"price"</span>:<span class="hljs-string">"Php 7,000,000"</span>}
</code></pre>
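<p>Both Go handlers share the same lookup logic: lowercase the <code>condo_name</code> query parameter, then index into the <code>condos</code> map. Here is that lookup sketched in isolation in Python (an illustration of the handler logic only, not part of the service):</p>

```python
# Mirrors the Go `condos` map, keyed by lowercase condo name.
CONDOS = {
    "arton": {"price": "Php 7,000,000", "location": "Katipunan, Quezon City"},
    "gold residences": {"price": "Php 8,000,000", "location": "Near NAIA Terminal 1, Paranaque City"},
    "aruga mactan": {"price": "Php 9,000,000", "location": "Mactan, Cebu"},
}

def lookup(condo_name, field):
    """Case-insensitive lookup; returns None for unknown condos,
    matching the 404 branch of the Go handlers."""
    condo = CONDOS.get(condo_name.lower())
    return condo[field] if condo else None

print(lookup("Arton", "location"))  # Katipunan, Quezon City
print(lookup("Unknown Tower", "price"))  # None
```

<p>This is why the curl examples above work with <code>condo_name=Arton</code> even though the map keys are lowercase.</p>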
<h2 id="heading-lets-start-implementing"><strong>Let's start implementing!</strong></h2>
<p>The next part is written in Python. It responds to a user's query or prompt and answers based on data retrieved from an API. For this example, we will use the Go API we created above to answer the following questions:</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">What is the price of Arton?</div>
</div>

<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Where is the Arton condo?</div>
</div>

<pre><code class="lang-python"><span class="hljs-keyword">from</span> groq <span class="hljs-keyword">import</span> Groq
<span class="hljs-keyword">import</span> os
<span class="hljs-keyword">import</span> json
<span class="hljs-keyword">import</span> requests

client = Groq(api_key=os.getenv(<span class="hljs-string">'GROQ_API_KEY'</span>))
MODEL = <span class="hljs-string">'llama3-70b-8192'</span>
</code></pre>
<p>Here, we import the necessary libraries: <code>Groq</code>, <code>os</code>, <code>json</code>, and <code>requests</code> (used later to call our Go API). We then set up a <code>Groq</code> client using an API key stored in an environment variable. The <code>MODEL</code> variable specifies the language model we'll be using.</p>
<p>Install the groq dependency using pip install.</p>
<pre><code class="lang-bash">pip install groq
</code></pre>
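<p>One gotcha with <code>os.getenv('GROQ_API_KEY')</code>: if the variable is unset, the client is created with <code>api_key=None</code> and only fails later with a less obvious error. A small guard (my own addition, not part of the Groq SDK) makes the failure explicit up front:</p>

```python
import os

def require_env(name):
    """Return the environment variable's value, or fail fast with a
    clear message instead of erroring deep inside the SDK later."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"{name} is not set; export it before running this script")
    return value

# Demo with a dummy value so this snippet is self-contained:
os.environ["GROQ_API_KEY"] = "dummy-key-for-demo"
print(require_env("GROQ_API_KEY"))  # dummy-key-for-demo
```

<p>You would then create the client with <code>Groq(api_key=require_env('GROQ_API_KEY'))</code>.</p>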
<h3 id="heading-define-functions-for-the-tool"><strong>Define Functions for the Tool</strong></h3>
<p>Ideally, these functions should return data from your own APIs. For demonstration purposes, we will use the API we created earlier; imagine that each function retrieves its data from a real REST API. This is powerful because the model's NLP capabilities determine which function to call. OpenAI offers similar function calling, and this works much the same way, but for this example we will harness Groq's fast inference!</p>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">get_condo_price</span>(<span class="hljs-params">condo_name</span>):</span>
    <span class="hljs-string">"""Get the price of a condominium"""</span>
    <span class="hljs-keyword">return</span> condo_price_api_call(condo_name)

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">get_condo_location</span>(<span class="hljs-params">condo_name</span>):</span>
    <span class="hljs-string">"""Get the location of a condominium"""</span>
    <span class="hljs-keyword">return</span> condo_location_api_call(condo_name)
</code></pre>
<p><strong>Function 1.</strong> <code>get_condo_price</code> takes a condominium name as input and returns the price of the condominium.</p>
<p><strong>Function 2.</strong> <code>get_condo_location</code> takes a condominium name as input and returns the location of the condominium.</p>
<h3 id="heading-api-calls-to-be-used-by-the-functions"><strong>API Calls to be used by the Functions</strong></h3>
<p>Then we'll define the functions that call the external API, which in this case is the Go API we created earlier. Feel free to change these to call your own APIs.</p>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">condo_price_api_call</span>(<span class="hljs-params">condo_name</span>):</span>
    url = <span class="hljs-string">f'http://localhost:8080/get_condo_price?condo_name=<span class="hljs-subst">{condo_name}</span>'</span>
    response = requests.get(url)
    <span class="hljs-keyword">if</span> response.status_code == <span class="hljs-number">200</span>:
        data = response.json()
        <span class="hljs-keyword">return</span> data.get(<span class="hljs-string">'price'</span>)
    <span class="hljs-keyword">else</span>:
        <span class="hljs-keyword">return</span> <span class="hljs-literal">None</span>  <span class="hljs-comment"># Handle error cases here if needed</span>

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">condo_location_api_call</span>(<span class="hljs-params">condo_name</span>):</span>
    url = <span class="hljs-string">f'http://localhost:8080/get_condo_location?condo_name=<span class="hljs-subst">{condo_name}</span>'</span>
    response = requests.get(url)
    <span class="hljs-keyword">if</span> response.status_code == <span class="hljs-number">200</span>:
        data = response.json()
        <span class="hljs-keyword">return</span> data.get(<span class="hljs-string">'location'</span>)
    <span class="hljs-keyword">else</span>:
        <span class="hljs-keyword">return</span> <span class="hljs-literal">None</span>
</code></pre>
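<p>Note that the helpers above handle non-200 responses, but a connection failure (for example, if the Go API isn't running) would raise an exception from <code>requests.get</code>. Here is a hardened variant of the same idea, my own sketch using only the standard library, which adds a timeout and returns <code>None</code> on network errors too:</p>

```python
import json
import urllib.error
import urllib.request

def safe_get_field(url, field, timeout=2.0):
    """GET the URL and return one field of the JSON body;
    return None on HTTP errors, bad JSON, timeouts, or unreachable servers."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.load(resp).get(field)
    except (urllib.error.URLError, OSError, ValueError):
        return None

# Nothing listens on port 9, so this degrades gracefully instead of crashing:
print(safe_get_field("http://localhost:9/get_condo_price?condo_name=Arton", "price"))  # None
```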
<h2 id="heading-running-the-conversation"><strong>Running the Conversation</strong></h2>
<pre><code class="lang-python">def run_conversation(user_prompt):
    ...
</code></pre>
<p>The <code>run_conversation</code> function takes a user prompt as input and sends it to the Groq model along with the available functions (<code>get_condo_price</code> and <code>get_condo_location</code>).</p>
<h3 id="heading-step-1-send-the-conversation-and-available-functions-to-the-model"><strong>Step 1: Send the Conversation and Available Functions to the Model</strong></h3>
<pre><code class="lang-python">    messages=[
        {
            <span class="hljs-string">"role"</span>: <span class="hljs-string">"system"</span>,
            <span class="hljs-string">"content"</span>: <span class="hljs-string">"You are a function calling LLM that uses the data extracted from the function and responds to "</span>
                       <span class="hljs-string">"the user with the result of the function."</span>
        },
        {
            <span class="hljs-string">"role"</span>: <span class="hljs-string">"user"</span>,
            <span class="hljs-string">"content"</span>: user_prompt,
        }
    ]
    tools = [
        {
            <span class="hljs-string">"type"</span>: <span class="hljs-string">"function"</span>,
            <span class="hljs-string">"function"</span>: {
                <span class="hljs-string">"name"</span>: <span class="hljs-string">"get_condo_price"</span>,
                <span class="hljs-string">"description"</span>: <span class="hljs-string">"Get the price of a condo or condominium name"</span>,
                <span class="hljs-string">"parameters"</span>: {
                    <span class="hljs-string">"type"</span>: <span class="hljs-string">"object"</span>,
                    <span class="hljs-string">"properties"</span>: {
                        <span class="hljs-string">"condo_name"</span>: {
                            <span class="hljs-string">"type"</span>: <span class="hljs-string">"string"</span>,
                            <span class="hljs-string">"description"</span>: <span class="hljs-string">"The name of the condominium or condo"</span>,
                        }
                    },
                    <span class="hljs-string">"required"</span>: [<span class="hljs-string">"condo_name"</span>],
                },
            },
        },
         {
            <span class="hljs-string">"type"</span>: <span class="hljs-string">"function"</span>,
            <span class="hljs-string">"function"</span>: {
                <span class="hljs-string">"name"</span>: <span class="hljs-string">"get_condo_location"</span>,
                <span class="hljs-string">"description"</span>: <span class="hljs-string">"Get the location of a condo or condominium name"</span>,
                <span class="hljs-string">"parameters"</span>: {
                    <span class="hljs-string">"type"</span>: <span class="hljs-string">"object"</span>,
                    <span class="hljs-string">"properties"</span>: {
                        <span class="hljs-string">"condo_name"</span>: {
                            <span class="hljs-string">"type"</span>: <span class="hljs-string">"string"</span>,
                            <span class="hljs-string">"description"</span>: <span class="hljs-string">"The name of the condominium or condo"</span>,
                        }
                    },
                    <span class="hljs-string">"required"</span>: [<span class="hljs-string">"condo_name"</span>],
                },
            },
        }
    ]
</code></pre>
<p>We define the conversation and available functions to send to the model.</p>
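<p>A note on the <code>arguments</code> the model will produce for these tools: they arrive as a JSON string, and the string may not always be valid or complete. A small defensive parse (my own sketch, not required by Groq) checks it against the schema's <code>required</code> list before dispatching:</p>

```python
import json

# The "parameters" schema from the get_condo_price tool definition above.
schema = {
    "type": "object",
    "properties": {"condo_name": {"type": "string"}},
    "required": ["condo_name"],
}

def parse_arguments(raw, schema):
    """Parse the model's arguments string and verify required keys exist."""
    args = json.loads(raw)
    missing = [key for key in schema["required"] if key not in args]
    if missing:
        raise ValueError(f"model omitted required argument(s): {missing}")
    return args

print(parse_arguments('{"condo_name": "Arton"}', schema))  # {'condo_name': 'Arton'}
```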
<h3 id="heading-step-2-get-the-models-response"><strong>Step 2: Get the Model's Response</strong></h3>
<pre><code class="lang-python">    response = client.chat.completions.create(
        model=MODEL,
        messages=messages,
        tools=tools,
        tool_choice=<span class="hljs-string">"auto"</span>,
        max_tokens=4096
    )
</code></pre>
<p>We get the model's response to the conversation and available functions.</p>
<p>This is what the <strong>response</strong> object looks like:</p>
<pre><code class="lang-python">
ChatCompletionMessage(
    content=<span class="hljs-literal">None</span>,
    role=<span class="hljs-string">'assistant'</span>,
    function_call=<span class="hljs-literal">None</span>,
    tool_calls=[
        ChatCompletionMessageToolCall(
            id=<span class="hljs-string">'call_2nvf'</span>,
            function=Function(
                arguments=<span class="hljs-string">'{"condo_name":"Arton"}'</span>,
                name=<span class="hljs-string">'get_condo_price'</span>
            ),
            type=<span class="hljs-string">'function'</span>
        )
    ]
)
</code></pre>
<h3 id="heading-step-3-check-if-the-model-wanted-to-call-a-function"><strong>Step 3: Check if the Model Wanted to Call a Function</strong></h3>
<pre><code class="lang-python">    response_message = response.choices[0].message
    tool_calls = response_message.tool_calls

    <span class="hljs-keyword">if</span> tool_calls: <span class="hljs-comment">#checks if tool_calls is an object</span>
        ...
</code></pre>
<p>We check if the model wanted to call a function.</p>
<h3 id="heading-step-4-call-the-function"><strong>Step 4: Call the Function</strong></h3>
<pre><code class="lang-python">        available_functions = {
            <span class="hljs-string">"get_condo_price"</span>: get_condo_price,
            <span class="hljs-string">"get_condo_location"</span>: get_condo_location,
        }
        messages.append(response_message)  <span class="hljs-comment"># extend conversation with assistant's reply</span>
        <span class="hljs-keyword">for</span> tool_call <span class="hljs-keyword">in</span> tool_calls: <span class="hljs-comment">#loop through tool_calls and get the function to call</span>
            function_name = tool_call.function.name
            function_to_call = available_functions[function_name]
            function_args = json.loads(tool_call.function.arguments)
            function_response = function_to_call(
                condo_name=function_args.get(<span class="hljs-string">"condo_name"</span>)
            )
            messages.append(
                {
                    <span class="hljs-string">"tool_call_id"</span>: tool_call.id,
                    <span class="hljs-string">"role"</span>: <span class="hljs-string">"tool"</span>,
                    <span class="hljs-string">"name"</span>: function_name,
                    <span class="hljs-string">"content"</span>: function_response,
                }
            )  <span class="hljs-comment"># extend conversation with function response</span>
</code></pre>
<p>We call the function and append the function response to the conversation.</p>
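<p>The loop above is plain-Python dispatch: look the function name up in a dict, <code>json.loads</code> the arguments, call the function, and append a <code>"tool"</code> message. You can exercise it in isolation with a hand-built stand-in for the SDK's tool-call object (the stub function and values here are hypothetical):</p>

```python
import json
from types import SimpleNamespace

# Stub standing in for the real API-backed function.
def get_condo_price(condo_name):
    return "Php 7,000,000" if condo_name.lower() == "arton" else None

available_functions = {"get_condo_price": get_condo_price}

# Hand-built object mimicking the shape of a ChatCompletionMessageToolCall.
tool_call = SimpleNamespace(
    id="call_demo",
    function=SimpleNamespace(name="get_condo_price",
                             arguments='{"condo_name": "Arton"}'),
)

messages = []
function_to_call = available_functions[tool_call.function.name]
function_args = json.loads(tool_call.function.arguments)
messages.append({
    "tool_call_id": tool_call.id,
    "role": "tool",
    "name": tool_call.function.name,
    "content": function_to_call(condo_name=function_args.get("condo_name")),
})
print(messages[0]["content"])  # Php 7,000,000
```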
<h3 id="heading-step-5-get-a-new-response-from-the-model"><strong>Step 5: Get a New Response from the Model</strong></h3>
<pre><code class="lang-python">        second_response = client.chat.completions.create(
            model=MODEL,
            messages=messages
        )  <span class="hljs-comment"># get a new response from the model where it can see the function response</span>
        <span class="hljs-built_in">return</span> second_response.choices[0].message.content
</code></pre>
<p>We get a new response from the model where it can see the function response.</p>
<h3 id="heading-step-6-testing-the-conversation"><strong>Step 6: Testing the Conversation</strong></h3>
<pre><code class="lang-python">prompt = <span class="hljs-string">"What is the price of Arton?"</span>
<span class="hljs-built_in">print</span>(<span class="hljs-string">'Price:'</span>, run_conversation(prompt))

prompt = <span class="hljs-string">"Where is the Arton condo?"</span>
<span class="hljs-built_in">print</span>(<span class="hljs-string">'Location:'</span>, run_conversation(prompt))
</code></pre>
<p>We test the conversation by providing two user prompts:<br />"What is the price of Arton?" and "Where is the Arton condo?"</p>
<h3 id="heading-step-7-running-the-python-app">Step 7: Running the Python app</h3>
<pre><code class="lang-bash">python main.py
</code></pre>
<p>You should see this response in your terminal.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1719295795456/241b7c5f-bf3d-46cd-93f7-470e62b3fe00.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-full-source-code"><strong>Full Source Code</strong></h2>
<p>Here is the full source code of the example. Just save it in a <strong>main.py</strong> file.</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> groq <span class="hljs-keyword">import</span> Groq
<span class="hljs-keyword">import</span> os
<span class="hljs-keyword">import</span> json
<span class="hljs-keyword">import</span> requests

client = Groq(api_key=os.getenv(<span class="hljs-string">'GROQ_API_KEY'</span>))
MODEL = <span class="hljs-string">'llama3-70b-8192'</span>


<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">get_condo_price</span>(<span class="hljs-params">condo_name</span>):</span>
    <span class="hljs-string">"""Get the price of a condominium"""</span>
    <span class="hljs-keyword">return</span> condo_price_api_call(condo_name)


<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">condo_price_api_call</span>(<span class="hljs-params">condo_name</span>):</span>
    url = <span class="hljs-string">f'http://localhost:8080/get_condo_price?condo_name=<span class="hljs-subst">{condo_name}</span>'</span>
    response = requests.get(url)
    <span class="hljs-keyword">if</span> response.status_code == <span class="hljs-number">200</span>:
        data = response.json()
        <span class="hljs-keyword">return</span> data.get(<span class="hljs-string">'price'</span>)
    <span class="hljs-keyword">else</span>:
        <span class="hljs-keyword">return</span> <span class="hljs-literal">None</span>  <span class="hljs-comment"># Handle error cases here if needed</span>


<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">get_condo_location</span>(<span class="hljs-params">condo_name</span>):</span>
    <span class="hljs-string">"""Get the location of a condominium"""</span>
    <span class="hljs-keyword">return</span> condo_location_api_call(condo_name)


<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">condo_location_api_call</span>(<span class="hljs-params">condo_name</span>):</span>
    url = <span class="hljs-string">f'http://localhost:8080/get_condo_location?condo_name=<span class="hljs-subst">{condo_name}</span>'</span>
    response = requests.get(url)
    <span class="hljs-keyword">if</span> response.status_code == <span class="hljs-number">200</span>:
        data = response.json()
        <span class="hljs-keyword">return</span> data.get(<span class="hljs-string">'location'</span>)
    <span class="hljs-keyword">else</span>:
        <span class="hljs-keyword">return</span> <span class="hljs-literal">None</span>


<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">run_conversation</span>(<span class="hljs-params">user_prompt</span>):</span>
    <span class="hljs-comment"># Step 1: send the conversation and available functions to the model</span>
    messages = [
        {
            <span class="hljs-string">"role"</span>: <span class="hljs-string">"system"</span>,
            <span class="hljs-string">"content"</span>: <span class="hljs-string">"You are a function calling LLM that uses the data extracted from the function and responds to "</span>
                       <span class="hljs-string">"the user with the result of the function. Do not mention about anything about the tool call."</span>
                       <span class="hljs-string">"Just respond with the answer to the user prompt."</span>
        },
        {
            <span class="hljs-string">"role"</span>: <span class="hljs-string">"user"</span>,
            <span class="hljs-string">"content"</span>: user_prompt,
        }
    ]
    tools = [
        {
            <span class="hljs-string">"type"</span>: <span class="hljs-string">"function"</span>,
            <span class="hljs-string">"function"</span>: {
                <span class="hljs-string">"name"</span>: <span class="hljs-string">"get_condo_price"</span>,
                <span class="hljs-string">"description"</span>: <span class="hljs-string">"Get the price of a condo or condominium name"</span>,
                <span class="hljs-string">"parameters"</span>: {
                    <span class="hljs-string">"type"</span>: <span class="hljs-string">"object"</span>,
                    <span class="hljs-string">"properties"</span>: {
                        <span class="hljs-string">"condo_name"</span>: {
                            <span class="hljs-string">"type"</span>: <span class="hljs-string">"string"</span>,
                            <span class="hljs-string">"description"</span>: <span class="hljs-string">"The name of the condominium or condo"</span>,
                        }
                    },
                    <span class="hljs-string">"required"</span>: [<span class="hljs-string">"condo_name"</span>],
                },
            },
        },
        {
            <span class="hljs-string">"type"</span>: <span class="hljs-string">"function"</span>,
            <span class="hljs-string">"function"</span>: {
                <span class="hljs-string">"name"</span>: <span class="hljs-string">"get_condo_location"</span>,
                <span class="hljs-string">"description"</span>: <span class="hljs-string">"Get the location of a condo or condominium name"</span>,
                <span class="hljs-string">"parameters"</span>: {
                    <span class="hljs-string">"type"</span>: <span class="hljs-string">"object"</span>,
                    <span class="hljs-string">"properties"</span>: {
                        <span class="hljs-string">"condo_name"</span>: {
                            <span class="hljs-string">"type"</span>: <span class="hljs-string">"string"</span>,
                            <span class="hljs-string">"description"</span>: <span class="hljs-string">"The name of the condominium or condo"</span>,
                        }
                    },
                    <span class="hljs-string">"required"</span>: [<span class="hljs-string">"condo_name"</span>],
                },
            },
        }
    ]
    response = client.chat.completions.create(
        model=MODEL,
        messages=messages,
        tools=tools,
        tool_choice=<span class="hljs-string">"auto"</span>,
        max_tokens=<span class="hljs-number">4096</span>
    )

    response_message = response.choices[<span class="hljs-number">0</span>].message
    tool_calls = response_message.tool_calls

    <span class="hljs-comment"># Step 2: check if the model wanted to call a function</span>
    <span class="hljs-keyword">if</span> tool_calls:
        <span class="hljs-comment"># Step 3: call the function</span>
        <span class="hljs-comment"># Note: the JSON response may not always be valid; be sure to handle errors</span>
        available_functions = {
            <span class="hljs-string">"get_condo_price"</span>: get_condo_price,
            <span class="hljs-string">"get_condo_location"</span>: get_condo_location,
        }  <span class="hljs-comment"># two functions in this example; you can register as many as you need</span>
        messages.append(response_message)  <span class="hljs-comment"># extend conversation with assistant's reply</span>
        <span class="hljs-comment"># Step 4: send the info for each function call and function response to the model</span>
        <span class="hljs-keyword">for</span> tool_call <span class="hljs-keyword">in</span> tool_calls:
            function_name = tool_call.function.name
            function_to_call = available_functions[function_name]
            function_args = json.loads(tool_call.function.arguments)
            function_response = function_to_call(
                condo_name=function_args.get(<span class="hljs-string">"condo_name"</span>)
            )
            messages.append(
                {
                    <span class="hljs-string">"tool_call_id"</span>: tool_call.id,
                    <span class="hljs-string">"role"</span>: <span class="hljs-string">"tool"</span>,
                    <span class="hljs-string">"name"</span>: function_name,
                    <span class="hljs-string">"content"</span>: function_response,
                }
            )  <span class="hljs-comment"># extend conversation with function response</span>
        second_response = client.chat.completions.create(
            model=MODEL,
            messages=messages
        )  <span class="hljs-comment"># get a new response from the model where it can see the function response</span>
        <span class="hljs-keyword">return</span> second_response.choices[<span class="hljs-number">0</span>].message.content


prompt = <span class="hljs-string">"What is the price of Arton?"</span>
print(<span class="hljs-string">'Price:'</span>, run_conversation(prompt))

prompt = <span class="hljs-string">"Where is the Arton condo?"</span>
print(<span class="hljs-string">'Location:'</span>, run_conversation(prompt))
</code></pre>
<p><strong>Summary</strong></p>
<p>In this blog post, we created an HTTP API in Go to simulate data retrieval, which the model then used to answer user queries.</p>
<p>We explored the function calling feature of the Groq APIs, which allows us to call external functions to retrieve information. We defined two functions, <code>get_condo_price</code> and <code>get_condo_location</code>, and used the <code>run_conversation</code> function to send the conversation and available functions to the model.</p>
<p>We then checked if the model wanted to call a function, called the function, and appended the function response to the conversation. Finally, we tested the conversation with two user prompts. The function calling feature of the Groq APIs provides a powerful way to build conversational AI models that can interact with external systems.</p>
<hr />
<p>If you're interested in learning more about integrating Go with Generative AI, follow my blog for more tutorials and insights. This is just the start!</p>
<p>I do live coding on <a target="_blank" href="https://twitch.tv/donvitocodes"><strong>Twitch</strong></a> and <a target="_blank" href="https://youtube.com/donvitocodes"><strong>YouTube</strong></a>. You can follow me if you'd like to ask me questions when I go live. I also post on <a target="_blank" href="https://www.linkedin.com/comm/mynetwork/discovery-see-all?usecase=PEOPLE_FOLLOWS&amp;followMember=melvinvivas"><strong>LinkedIn</strong></a>; you can connect with me there as well.</p>
<p>Interested in implementing AI within your company? You can reach out to me.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://buymeacoffee.com/donvitocodes/e/260379">https://buymeacoffee.com/donvitocodes/e/260379</a></div>
<p> </p>
<p>Not yet sure how I can help? Book a FREE discovery call with me.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://buymeacoffee.com/donvitocodes/e/260160">https://buymeacoffee.com/donvitocodes/e/260160</a></div>
]]></content:encoded></item><item><title><![CDATA[Using Go to Translate Text using the Groq API (Fast Inference)]]></title><description><![CDATA[In this blog post, I'll walk you through the process of using Go to translate text using Groq's Fast Inference API.
What is Groq?
Groq is a new company that's making AI work much faster than ever before. They've created special computer parts cal...]]></description><link>https://blog.donvitocodes.com/using-go-to-translate-text-using-the-groq-api-fast-inference</link><guid isPermaLink="true">https://blog.donvitocodes.com/using-go-to-translate-text-using-the-groq-api-fast-inference</guid><category><![CDATA[Go Language]]></category><category><![CDATA[generative ai]]></category><category><![CDATA[#ai-tools]]></category><category><![CDATA[AI]]></category><dc:creator><![CDATA[DonvitoCodes]]></dc:creator><pubDate>Mon, 24 Jun 2024 09:50:41 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1719221069785/bcd5a9a4-231d-492b-9d9f-283c217b8fc0.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this blog post, I'll walk you through the process of using Go to translate text using Groq's Fast Inference API.</p>
<h2 id="heading-what-is-groq">What is Groq?</h2>
<p><a target="_blank" href="https://wow.groq.com/why-groq/">Groq</a> is a company that's making AI inference much faster than ever before. They've created special chips called Language Processing Units (LPUs) that are built specifically for understanding and generating language. Imagine generating hundreds of tokens in a single second - that's how fast Groq's technology is! It's far quicker than the GPUs we usually use for AI.</p>
<h3 id="heading-groq-vs-chatgpt">Groq vs ChatGPT</h3>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/8UzW_AGX68g?si=9bZsn7lxxH88XwWy&amp;t=16">https://youtu.be/8UzW_AGX68g?si=9bZsn7lxxH88XwWy&amp;t=16</a></div>
<p> </p>
<p>This super fast speed is great for lots of things. It can help make:</p>
<ul>
<li><p>Translation apps that work right away</p>
</li>
<li><p>Chatbots that talk to you without waiting</p>
</li>
<li><p>Trading programs that make decisions super fast</p>
</li>
</ul>
<p>The best part? Groq made their fast APIs available for anyone to use.</p>
<p>Groq supports several open-source LLMs, so it is a great alternative to OpenAI's GPT models.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1719222125704/1049a63f-6cd1-4903-af59-43eb4442dc0d.png" alt class="image--center mx-auto" /></p>
<p>By the end of this tutorial, you'll have a clear understanding of how to set up your environment, make Groq API requests, and handle the responses to translate text. Let's dive in!</p>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>Before we start, ensure you have the following:</p>
<ul>
<li><p>A basic understanding of the Go programming language.</p>
</li>
<li><p>Go installed on your machine. I am using the latest version, Go 1.22.</p>
</li>
<li><p>An API key from the Groq Fast Inference API.</p>
</li>
</ul>
<h2 id="heading-getting-your-groq-api-key">Getting Your Groq API Key</h2>
<p>To interact with the Groq Fast Inference API, you'll need an API key. Follow these steps to get your API key:</p>
<ol>
<li><p>Go to <a target="_blank" href="https://groq.com/">https://groq.com</a>.</p>
</li>
<li><p>Sign up or log in to your Groq account.</p>
</li>
<li><p>Visit <a target="_blank" href="https://console.groq.com/keys">https://console.groq.com/keys</a>. This page shows all your API keys.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1719216572148/162ab4a1-bdd2-40d9-be87-137a9df658b2.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Generate a new API key, which we need to access Groq's API. You should see this view after clicking Create API Key. Use any display name you like.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1719216519123/a9f0c5c1-3a27-4d80-a75f-c38c8e057b9d.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Copy and store the newly generated API key in a secure place, as you'll need it in the following steps.</p>
</li>
</ol>
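<p>Once you have your key, set it as an environment variable so you never hardcode it in your source. The variable name <code>GROQ_API_KEY</code> matches what the Go code later in this post reads; the value below is a placeholder:</p>

```shell
# Export the Groq API key for the current shell session.
# Replace the placeholder with the real key from the Groq console.
export GROQ_API_KEY="your-api-key-here"

# Confirm the variable is set
echo "$GROQ_API_KEY"
```

<p>Add the export line to your shell profile (e.g. <code>~/.zshrc</code>) if you want it available in every session.</p>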
<h2 id="heading-setting-up-your-go-environment">Setting Up Your Go Environment</h2>
<p>Create a new Go project and initialize a Go module:</p>
<pre><code class="lang-bash">mkdir go-groq
<span class="hljs-built_in">cd</span> go-groq
go mod init go-groq
</code></pre>
<p>Next, create a file named <code>main.go</code> and add the following code:</p>
<pre><code class="lang-go"><span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"errors"</span>
    <span class="hljs-string">"fmt"</span>
    <span class="hljs-string">"os"</span>
)

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {

    apiKey := os.Getenv(<span class="hljs-string">"GROQ_API_KEY"</span>)
    <span class="hljs-keyword">if</span> apiKey == <span class="hljs-string">""</span> {
        err := errors.New(<span class="hljs-string">"GROQ_API_KEY needs to be set as an environment variable"</span>)
        <span class="hljs-built_in">panic</span>(err)
    }

    groqClient := &amp;GroqClient{ApiKey: apiKey}
    textToTranslate := <span class="hljs-string">"Kim Ji-won was born on October 19, 1992, in Geumcheon District, Seoul, South Korea, and has an elder sister who is two years older than her. While still a teenager in 2007, she was scouted on the street and signed with an entertainment agency, she subsequently became a trainee for over three years while preparing for her debut. During her first year of junior high school, she spent six months to a year studying in Chicago, Illinois, United States, where her maternal relatives lived."</span>

    systemPrompt := <span class="hljs-string">"you are a professional language translator. "</span> +
        <span class="hljs-string">"only respond with the translated text. do not explain."</span>
    prompt := fmt.Sprintf(<span class="hljs-string">"translate this text to Tagalog: %s"</span>, textToTranslate)

    translatedText, err := groqClient.ChatCompletion(LLMModelLlama370b, systemPrompt, prompt)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        fmt.Println(err)
    }

    <span class="hljs-keyword">if</span> translatedText != <span class="hljs-literal">nil</span> {
        fmt.Println(*translatedText)
    }
}
</code></pre>
<h2 id="heading-implementing-the-groqclient">Implementing the GroqClient</h2>
<p>Next, we'll implement the <code>GroqClient</code> that will handle communication with the Groq Fast Inference API. Create a new file named <code>groq_client.go</code> and add the following code:</p>
<pre><code class="lang-go"><span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"bytes"</span>
    <span class="hljs-string">"encoding/json"</span>
    <span class="hljs-string">"fmt"</span>
    <span class="hljs-string">"io"</span>
    <span class="hljs-string">"net/http"</span>
)

<span class="hljs-keyword">const</span> (
    apiBaseUrl = <span class="hljs-string">"https://api.groq.com/openai"</span>
    SYSTEM     = <span class="hljs-string">"system"</span>
    USER       = <span class="hljs-string">"user"</span>

    LLMModelLlama38b       = <span class="hljs-string">"llama3-8b-8192"</span>
    LLMModelLlama370b      = <span class="hljs-string">"llama3-70b-8192"</span>
    LLMModelMixtral8x7b32k = <span class="hljs-string">"mixtral-8x7b-32768"</span>
    LLMModelGemma7b        = <span class="hljs-string">"gemma-7b-it"</span>
)

<span class="hljs-keyword">type</span> GroqClient <span class="hljs-keyword">struct</span> {
    ApiKey <span class="hljs-keyword">string</span>
}

<span class="hljs-keyword">type</span> GroqMessage <span class="hljs-keyword">struct</span> {
    Role    <span class="hljs-keyword">string</span> <span class="hljs-string">`json:"role"`</span>
    Content <span class="hljs-keyword">string</span> <span class="hljs-string">`json:"content"`</span>
}

<span class="hljs-keyword">type</span> ChatCompletionRequest <span class="hljs-keyword">struct</span> {
    Messages    []GroqMessage <span class="hljs-string">`json:"messages"`</span>
    Model       <span class="hljs-keyword">string</span>        <span class="hljs-string">`json:"model"`</span>
    Temperature <span class="hljs-keyword">int</span>           <span class="hljs-string">`json:"temperature"`</span>
    MaxTokens   <span class="hljs-keyword">int</span>           <span class="hljs-string">`json:"max_tokens"`</span>
    TopP        <span class="hljs-keyword">int</span>           <span class="hljs-string">`json:"top_p"`</span>
    Stream      <span class="hljs-keyword">bool</span>          <span class="hljs-string">`json:"stream"`</span>
    Stop        <span class="hljs-keyword">interface</span>{}   <span class="hljs-string">`json:"stop"`</span>
}

<span class="hljs-keyword">type</span> ChatCompletionResponse <span class="hljs-keyword">struct</span> {
    Id      <span class="hljs-keyword">string</span> <span class="hljs-string">`json:"id"`</span>
    Object  <span class="hljs-keyword">string</span> <span class="hljs-string">`json:"object"`</span>
    Created <span class="hljs-keyword">int</span>    <span class="hljs-string">`json:"created"`</span>
    Model   <span class="hljs-keyword">string</span> <span class="hljs-string">`json:"model"`</span>
    Choices []<span class="hljs-keyword">struct</span> {
        Index   <span class="hljs-keyword">int</span> <span class="hljs-string">`json:"index"`</span>
        Message <span class="hljs-keyword">struct</span> {
            Role    <span class="hljs-keyword">string</span> <span class="hljs-string">`json:"role"`</span>
            Content <span class="hljs-keyword">string</span> <span class="hljs-string">`json:"content"`</span>
        } <span class="hljs-string">`json:"message"`</span>
        Logprobs     <span class="hljs-keyword">interface</span>{} <span class="hljs-string">`json:"logprobs"`</span>
        FinishReason <span class="hljs-keyword">string</span>      <span class="hljs-string">`json:"finish_reason"`</span>
    } <span class="hljs-string">`json:"choices"`</span>
    Usage <span class="hljs-keyword">struct</span> {
        PromptTokens     <span class="hljs-keyword">int</span>     <span class="hljs-string">`json:"prompt_tokens"`</span>
        PromptTime       <span class="hljs-keyword">float64</span> <span class="hljs-string">`json:"prompt_time"`</span>
        CompletionTokens <span class="hljs-keyword">int</span>     <span class="hljs-string">`json:"completion_tokens"`</span>
        CompletionTime   <span class="hljs-keyword">float64</span> <span class="hljs-string">`json:"completion_time"`</span>
        TotalTokens      <span class="hljs-keyword">int</span>     <span class="hljs-string">`json:"total_tokens"`</span>
        TotalTime        <span class="hljs-keyword">float64</span> <span class="hljs-string">`json:"total_time"`</span>
    } <span class="hljs-string">`json:"usage"`</span>
    SystemFingerprint <span class="hljs-keyword">string</span> <span class="hljs-string">`json:"system_fingerprint"`</span>
    XGroq             <span class="hljs-keyword">struct</span> {
        Id <span class="hljs-keyword">string</span> <span class="hljs-string">`json:"id"`</span>
    } <span class="hljs-string">`json:"x_groq"`</span>
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(g *GroqClient)</span> <span class="hljs-title">ChatCompletion</span><span class="hljs-params">(llmModel <span class="hljs-keyword">string</span>, systemPrompt <span class="hljs-keyword">string</span>, prompt <span class="hljs-keyword">string</span>)</span> <span class="hljs-params">(*<span class="hljs-keyword">string</span>, error)</span></span> {

    llm := llmModel

    <span class="hljs-keyword">if</span> llmModel == <span class="hljs-string">""</span> {
        <span class="hljs-comment">//default to llama8B</span>
        llm = LLMModelLlama38b
    }
    groqMessages := <span class="hljs-built_in">make</span>([]GroqMessage, <span class="hljs-number">0</span>)

    <span class="hljs-keyword">if</span> systemPrompt != <span class="hljs-string">""</span> {
        systemMessage := GroqMessage{
            Role:    SYSTEM,
            Content: systemPrompt,
        }
        groqMessages = <span class="hljs-built_in">append</span>(groqMessages, systemMessage)
    }

    <span class="hljs-keyword">if</span> prompt != <span class="hljs-string">""</span> {
        userMessage := GroqMessage{
            Role:    USER,
            Content: prompt,
        }
        groqMessages = <span class="hljs-built_in">append</span>(groqMessages, userMessage)
    } <span class="hljs-keyword">else</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, fmt.Errorf(<span class="hljs-string">"prompt is required"</span>)
    }

    chatCompletionRequest := &amp;ChatCompletionRequest{
        Messages:    groqMessages,
        Model:       llm,
        Temperature: <span class="hljs-number">0</span>,
        MaxTokens:   <span class="hljs-number">1024</span>,
        TopP:        <span class="hljs-number">1</span>,
        Stream:      <span class="hljs-literal">false</span>,
        Stop:        <span class="hljs-literal">nil</span>,
    }

    chatCompletionRequestJson, err := json.Marshal(chatCompletionRequest)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, err
    }

    <span class="hljs-comment">//send http post request</span>
    chatCompletionUrl := <span class="hljs-string">"/v1/chat/completions"</span>
    finalUrl := fmt.Sprintf(<span class="hljs-string">"%s%s"</span>, apiBaseUrl, chatCompletionUrl)

    req, err := http.NewRequest(http.MethodPost, finalUrl, bytes.NewBuffer(chatCompletionRequestJson))
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, err
    }

    <span class="hljs-comment">//set headers</span>
    req.Header.Set(<span class="hljs-string">"Content-Type"</span>, <span class="hljs-string">"application/json"</span>)
    req.Header.Set(<span class="hljs-string">"Authorization"</span>, fmt.Sprintf(<span class="hljs-string">"Bearer %s"</span>, g.ApiKey))

    client := &amp;http.Client{}
    resp, err := client.Do(req)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, err
    }

    <span class="hljs-comment">//always close the response body, even on error statuses</span>
    <span class="hljs-keyword">defer</span> <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">(Body io.ReadCloser)</span></span> {
        err = Body.Close()
        <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
            fmt.Println(<span class="hljs-string">"Error:"</span>, err)
        }
    }(resp.Body)

    <span class="hljs-keyword">if</span> resp.StatusCode != <span class="hljs-number">200</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, fmt.Errorf(<span class="hljs-string">"unexpected status code: %d, reason: %s"</span>, resp.StatusCode, resp.Status)
    }

    body, err := io.ReadAll(resp.Body)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, err
    }

    chatCompletionResp := &amp;ChatCompletionResponse{}

    err = json.Unmarshal(body, &amp;chatCompletionResp)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, err
    }

    <span class="hljs-keyword">var</span> content <span class="hljs-keyword">string</span>
    <span class="hljs-keyword">if</span> chatCompletionResp.Choices != <span class="hljs-literal">nil</span> &amp;&amp; <span class="hljs-built_in">len</span>(chatCompletionResp.Choices) &gt; <span class="hljs-number">0</span> {
        content = chatCompletionResp.Choices[<span class="hljs-number">0</span>].Message.Content
    } <span class="hljs-keyword">else</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, fmt.Errorf(<span class="hljs-string">"no choices"</span>)
    }

    <span class="hljs-keyword">return</span> &amp;content, <span class="hljs-literal">nil</span>
}
</code></pre>
<h3 id="heading-breaking-down-the-groqclient-code">Breaking Down the GroqClient Code</h3>
<p>Let's go through the <code>GroqClient</code> code step by step to understand how it works.</p>
<h4 id="heading-constants-and-structs">Constants and Structs</h4>
<p>First, we define some constants and structs used throughout the code:</p>
<pre><code class="lang-go"><span class="hljs-keyword">const</span> (
    apiBaseUrl = <span class="hljs-string">"https://api.groq.com/openai"</span>
    SYSTEM     = <span class="hljs-string">"system"</span>
    USER       = <span class="hljs-string">"user"</span>

    LLMModelLlama38b       = <span class="hljs-string">"llama3-8b-8192"</span>
    LLMModelLlama370b      = <span class="hljs-string">"llama3-70b-8192"</span>
    LLMModelMixtral8x7b32k = <span class="hljs-string">"mixtral-8x7b-32768"</span>
    LLMModelGemma7b        = <span class="hljs-string">"gemma-7b-it"</span>
)

<span class="hljs-keyword">type</span> GroqClient <span class="hljs-keyword">struct</span> {
    ApiKey <span class="hljs-keyword">string</span>
}

<span class="hljs-keyword">type</span> GroqMessage <span class="hljs-keyword">struct</span> {
    Role    <span class="hljs-keyword">string</span> <span class="hljs-string">`json:"role"`</span>
    Content <span class="hljs-keyword">string</span> <span class="hljs-string">`json:"content"`</span>
}

<span class="hljs-keyword">type</span> ChatCompletionRequest <span class="hljs-keyword">struct</span> {
    Messages    []GroqMessage <span class="hljs-string">`json:"messages"`</span>
    Model       <span class="hljs-keyword">string</span>        <span class="hljs-string">`json:"model"`</span>
    Temperature <span class="hljs-keyword">int</span>           <span class="hljs-string">`json:"temperature"`</span>
    MaxTokens   <span class="hljs-keyword">int</span>           <span class="hljs-string">`json:"max_tokens"`</span>
    TopP        <span class="hljs-keyword">int</span>           <span class="hljs-string">`json:"top_p"`</span>
    Stream      <span class="hljs-keyword">bool</span>          <span class="hljs-string">`json:"stream"`</span>
    Stop        <span class="hljs-keyword">interface</span>{}   <span class="hljs-string">`json:"stop"`</span>
}

<span class="hljs-keyword">type</span> ChatCompletionResponse <span class="hljs-keyword">struct</span> {
    Id      <span class="hljs-keyword">string</span> <span class="hljs-string">`json:"id"`</span>
    Object  <span class="hljs-keyword">string</span> <span class="hljs-string">`json:"object"`</span>
    Created <span class="hljs-keyword">int</span>    <span class="hljs-string">`json:"created"`</span>
    Model   <span class="hljs-keyword">string</span> <span class="hljs-string">`json:"model"`</span>
    Choices []<span class="hljs-keyword">struct</span> {
        Index   <span class="hljs-keyword">int</span> <span class="hljs-string">`json:"index"`</span>
        Message <span class="hljs-keyword">struct</span> {
            Role    <span class="hljs-keyword">string</span> <span class="hljs-string">`json:"role"`</span>
            Content <span class="hljs-keyword">string</span> <span class="hljs-string">`json:"content"`</span>
        } <span class="hljs-string">`json:"message"`</span>
        Logprobs     <span class="hljs-keyword">interface</span>{} <span class="hljs-string">`json:"logprobs"`</span>
        FinishReason <span class="hljs-keyword">string</span>      <span class="hljs-string">`json:"finish_reason"`</span>
    } <span class="hljs-string">`json:"choices"`</span>
    Usage <span class="hljs-keyword">struct</span> {
        PromptTokens     <span class="hljs-keyword">int</span>     <span class="hljs-string">`json:"prompt_tokens"`</span>
        PromptTime       <span class="hljs-keyword">float64</span> <span class="hljs-string">`json:"prompt_time"`</span>
        CompletionTokens <span class="hljs-keyword">int</span>     <span class="hljs-string">`json:"completion_tokens"`</span>
        CompletionTime   <span class="hljs-keyword">float64</span> <span class="hljs-string">`json:"completion_time"`</span>
        TotalTokens      <span class="hljs-keyword">int</span>     <span class="hljs-string">`json:"total_tokens"`</span>
        TotalTime        <span class="hljs-keyword">float64</span> <span class="hljs-string">`json:"total_time"`</span>
    } <span class="hljs-string">`json:"usage"`</span>
    SystemFingerprint <span class="hljs-keyword">string</span> <span class="hljs-string">`json:"system_fingerprint"`</span>
    XGroq             <span class="hljs-keyword">struct</span> {
        Id <span class="hljs-keyword">string</span> <span class="hljs-string">`json:"id"`</span>
    } <span class="hljs-string">`json:"x_groq"`</span>
}
</code></pre>
<p>Here, we define constants for API base URLs, message roles, and model names. The <code>GroqClient</code> struct holds the API key, while <code>GroqMessage</code>, <code>ChatCompletionRequest</code>, and <code>ChatCompletionResponse</code> structs define the request and response formats.</p>
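<p>To make the payload concrete, here's a small standalone sketch that marshals these structs into the JSON body sent to the API. The <code>buildRequest</code> helper is my own name for illustration; the client code assembles this inline:</p>

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Minimal copies of the request structs, reproduced so this example
// compiles on its own.
type GroqMessage struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type ChatCompletionRequest struct {
	Messages    []GroqMessage `json:"messages"`
	Model       string        `json:"model"`
	Temperature int           `json:"temperature"`
	MaxTokens   int           `json:"max_tokens"`
	TopP        int           `json:"top_p"`
	Stream      bool          `json:"stream"`
	Stop        interface{}   `json:"stop"`
}

// buildRequest assembles a system + user message pair with the same
// defaults the client uses (max_tokens 1024, top_p 1).
func buildRequest(model, systemPrompt, userPrompt string) ChatCompletionRequest {
	return ChatCompletionRequest{
		Messages: []GroqMessage{
			{Role: "system", Content: systemPrompt},
			{Role: "user", Content: userPrompt},
		},
		Model:     model,
		MaxTokens: 1024,
		TopP:      1,
	}
}

func main() {
	req := buildRequest("llama3-8b-8192",
		"you are a professional language translator.",
		"translate this text to Tagalog: Good morning")
	// Print the JSON body that would be POSTed to /v1/chat/completions.
	out, err := json.MarshalIndent(req, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

<p>Because the struct tags mirror the API's field names, the marshaled output shows exactly what goes over the wire, which makes request bugs easy to spot.</p>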
<h4 id="heading-chatcompletion-function">ChatCompletion Function</h4>
<p>Let's break down the <code>ChatCompletion</code> function. This function calls the LLM, which translates the text we send it. You can use any of the models supported by Groq; I have already listed the text-based models in the constants we defined earlier.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Check out the <a target="_blank" href="https://console.groq.com/docs/models">Groq Documentation</a> for the list of models supported</div>
</div>

<pre><code class="lang-go"><span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(g *GroqClient)</span> <span class="hljs-title">ChatCompletion</span><span class="hljs-params">(llmModel <span class="hljs-keyword">string</span>, systemPrompt <span class="hljs-keyword">string</span>, prompt <span class="hljs-keyword">string</span>)</span> <span class="hljs-params">(*<span class="hljs-keyword">string</span>, error)</span></span> {
    <span class="hljs-comment">// Determine the model to use</span>
    llm := llmModel
    <span class="hljs-keyword">if</span> llmModel == <span class="hljs-string">""</span> {
        llm = LLMModelLlama38b
    }

    <span class="hljs-comment">// Create messages slice</span>
    groqMessages := <span class="hljs-built_in">make</span>([]GroqMessage, <span class="hljs-number">0</span>)

    <span class="hljs-comment">// Add system message if provided</span>
    <span class="hljs-keyword">if</span> systemPrompt != <span class="hljs-string">""</span> {
        systemMessage := GroqMessage{
            Role:    SYSTEM,
            Content: systemPrompt,
        }
        groqMessages = <span class="hljs-built_in">append</span>(groqMessages, systemMessage)
    }

    <span class="hljs-comment">// Add user prompt message</span>
    <span class="hljs-keyword">if</span> prompt != <span class="hljs-string">""</span> {
        userMessage := GroqMessage{
            Role:    USER,
            Content: prompt,
        }
        groqMessages = <span class="hljs-built_in">append</span>(groqMessages, userMessage)
    } <span class="hljs-keyword">else</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, fmt.Errorf(<span class="hljs-string">"prompt is required"</span>)
    }

    <span class="hljs-comment">// Create request payload</span>
    chatCompletionRequest := &amp;ChatCompletionRequest{
        Messages:    groqMessages,
        Model:       llm,
        Temperature: <span class="hljs-number">0</span>,
        MaxTokens:   <span class="hljs-number">1024</span>,
        TopP:        <span class="hljs-number">1</span>,
        Stream:      <span class="hljs-literal">false</span>,
        Stop:        <span class="hljs-literal">nil</span>,
    }

    <span class="hljs-comment">// Serialize request to JSON</span>
    chatCompletionRequestJson, err := json.Marshal(chatCompletionRequest)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, err
    }

    <span class="hljs-comment">// Construct the final URL</span>
    chatCompletionUrl := <span class="hljs-string">"/v1/chat/completions"</span>
    finalUrl := fmt.Sprintf(<span class="hljs-string">"%s%s"</span>, apiBaseUrl, chatCompletionUrl)

    <span class="hljs-comment">// Create HTTP request</span>
    req, err := http.NewRequest(http.MethodPost, finalUrl, bytes.NewBuffer(chatCompletionRequestJson))
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, err
    }

    <span class="hljs-comment">// Set headers</span>
    req.Header.Set(<span class="hljs-string">"Content-Type"</span>, <span class="hljs-string">"application/json"</span>)
    req.Header.Set(<span class="hljs-string">"Authorization"</span>, fmt.Sprintf(<span class="hljs-string">"Bearer %s"</span>, g.ApiKey))

    <span class="hljs-comment">// Execute HTTP request</span>
    client := &amp;http.Client{}
    resp, err := client.Do(req)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, err
    }

    <span class="hljs-comment">// Always close the response body, even on error statuses</span>
    <span class="hljs-keyword">defer</span> <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">(Body io.ReadCloser)</span></span> {
        err = Body.Close()
        <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
            fmt.Println(<span class="hljs-string">"Error:"</span>, err)
        }
    }(resp.Body)

    <span class="hljs-comment">// Check response status code</span>
    <span class="hljs-keyword">if</span> resp.StatusCode != <span class="hljs-number">200</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, fmt.Errorf(<span class="hljs-string">"unexpected status code: %d, reason: %s"</span>, resp.StatusCode, resp.Status)
    }

    body, err := io.ReadAll(resp.Body)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, err
    }

    <span class="hljs-comment">// Parse response JSON</span>
    chatCompletionResp := &amp;ChatCompletionResponse{}
    err = json.Unmarshal(body, chatCompletionResp)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, err
    }

    <span class="hljs-comment">// Extract the translated text</span>
    <span class="hljs-keyword">var</span> content <span class="hljs-keyword">string</span>
    <span class="hljs-keyword">if</span> chatCompletionResp.Choices != <span class="hljs-literal">nil</span> &amp;&amp; <span class="hljs-built_in">len</span>(chatCompletionResp.Choices) &gt; <span class="hljs-number">0</span> {
        content = chatCompletionResp.Choices[<span class="hljs-number">0</span>].Message.Content
    } <span class="hljs-keyword">else</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, fmt.Errorf(<span class="hljs-string">"no choices"</span>)
    }

    <span class="hljs-keyword">return</span> &amp;content, <span class="hljs-literal">nil</span>
}
</code></pre>
<p><strong>Determine the Model to Use:</strong> We start by setting the model to use for the request. If none is provided, we default to <code>LLMModelLlama38b</code>.</p>
<pre><code class="lang-go"><span class="hljs-comment">// Determine the model to use</span>
llm := llmModel
<span class="hljs-keyword">if</span> llmModel == <span class="hljs-string">""</span> {
    llm = LLMModelLlama38b
}
</code></pre>
<p><strong>Create Messages Slice:</strong> We create a slice to hold the messages for the chat completion request.</p>
<pre><code class="lang-go"><span class="hljs-comment">// Create messages slice</span>
groqMessages := <span class="hljs-built_in">make</span>([]GroqMessage, <span class="hljs-number">0</span>)
</code></pre>
<p><strong>Add System Message:</strong> If a system prompt is provided, we add it as a system message.</p>
<pre><code class="lang-go"><span class="hljs-comment">// Add system message if provided</span>
<span class="hljs-keyword">if</span> systemPrompt != <span class="hljs-string">""</span> {
    systemMessage := GroqMessage{
        Role:    SYSTEM,
        Content: systemPrompt,
    }
    groqMessages = <span class="hljs-built_in">append</span>(groqMessages, systemMessage)
}
</code></pre>
<p><strong>Add User Prompt Message:</strong> We add the user prompt as a message. If the prompt is empty, we return an error.</p>
<pre><code class="lang-go"><span class="hljs-comment">// Add user prompt message</span>
<span class="hljs-keyword">if</span> prompt != <span class="hljs-string">""</span> {
    userMessage := GroqMessage{
        Role:    USER,
        Content: prompt,
    }
    groqMessages = <span class="hljs-built_in">append</span>(groqMessages, userMessage)
} <span class="hljs-keyword">else</span> {
    <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, fmt.Errorf(<span class="hljs-string">"prompt is required"</span>)
}
</code></pre>
<p><strong>Create Request Payload:</strong> We construct the request payload with the messages, model, and other parameters.</p>
<pre><code class="lang-go"><span class="hljs-comment">// Create request payload</span>
chatCompletionRequest := &amp;ChatCompletionRequest{
    Messages:    groqMessages,
    Model:       llm,
    Temperature: <span class="hljs-number">0</span>,
    MaxTokens:   <span class="hljs-number">1024</span>,
    TopP:        <span class="hljs-number">1</span>,
    Stream:      <span class="hljs-literal">false</span>,
    Stop:        <span class="hljs-literal">nil</span>,
}
</code></pre>
<p><strong>Serialize Request to JSON:</strong> We serialize the request payload to JSON format.</p>
<pre><code class="lang-go"><span class="hljs-comment">// Serialize request to JSON</span>
chatCompletionRequestJson, err := json.Marshal(chatCompletionRequest)
<span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
    <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, err
}
</code></pre>
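<p>As a sanity check, here is a standalone, runnable sketch of what that serialized payload looks like. The struct definitions below are simplified mirrors of the ones used in this post, and the field names and model ID are assumptions for illustration:</p>
<pre><code class="lang-go">package main

import (
    "encoding/json"
    "fmt"
)

// Simplified mirrors of the request structs used in this post.
type GroqMessage struct {
    Role    string `json:"role"`
    Content string `json:"content"`
}

type ChatCompletionRequest struct {
    Messages    []GroqMessage `json:"messages"`
    Model       string        `json:"model"`
    Temperature float64       `json:"temperature"`
    MaxTokens   int           `json:"max_tokens"`
    TopP        float64       `json:"top_p"`
    Stream      bool          `json:"stream"`
}

// buildPayload constructs and serializes a sample chat completion request.
func buildPayload() ([]byte, error) {
    req := ChatCompletionRequest{
        Messages: []GroqMessage{
            {Role: "system", Content: "Translate the user's text to French."},
            {Role: "user", Content: "Hello, world!"},
        },
        Model:       "llama3-8b-8192",
        Temperature: 0,
        MaxTokens:   1024,
        TopP:        1,
    }
    return json.Marshal(req)
}

func main() {
    payload, err := buildPayload()
    if err != nil {
        fmt.Println("Error:", err)
        return
    }
    fmt.Println(string(payload))
}
</code></pre>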
<p><strong>Construct the Final URL:</strong> We build the final URL for the API endpoint.</p>
<pre><code class="lang-go"><span class="hljs-comment">// Construct the final URL</span>
chatCompletionUrl := <span class="hljs-string">"/v1/chat/completions"</span>
finalUrl := fmt.Sprintf(<span class="hljs-string">"%s%s"</span>, apiBaseUrl, chatCompletionUrl)
</code></pre>
<p><strong>Create HTTP Request:</strong> We create a new HTTP POST request with the serialized JSON payload.</p>
<pre><code class="lang-go"><span class="hljs-comment">// Create HTTP request</span>
req, err := http.NewRequest(http.MethodPost, finalUrl, bytes.NewBuffer(chatCompletionRequestJson))
<span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
    <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, err
}
</code></pre>
<p><strong>Set Headers:</strong> We set the necessary headers, including the authorization header with the API key.</p>
<pre><code class="lang-go"><span class="hljs-comment">// Set headers</span>
req.Header.Set(<span class="hljs-string">"Content-Type"</span>, <span class="hljs-string">"application/json"</span>)
req.Header.Set(<span class="hljs-string">"Authorization"</span>, fmt.Sprintf(<span class="hljs-string">"Bearer %s"</span>, g.ApiKey))
</code></pre>
<p><strong>Execute HTTP Request:</strong> We send the HTTP request using an HTTP client.</p>
<pre><code class="lang-go"><span class="hljs-comment">// Execute HTTP request</span>
client := &amp;http.Client{}
resp, err := client.Do(req)
<span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
    <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, err
}
</code></pre>
<p><strong>Close the Response Body:</strong> We defer closing the response body as soon as the request succeeds, so the connection is released even if a later check returns early.</p>
<pre><code class="lang-go"><span class="hljs-comment">// Close the response body when done, even if a later check fails</span>
<span class="hljs-keyword">defer</span> <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">(Body io.ReadCloser)</span></span> {
    err = Body.Close()
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        fmt.Println(<span class="hljs-string">"Error:"</span>, err)
    }
}(resp.Body)
</code></pre>
<p><strong>Check Response Status Code:</strong> We verify that the response status code is 200 (OK) and return an error otherwise.</p>
<pre><code class="lang-go"><span class="hljs-comment">// Check response status code</span>
<span class="hljs-keyword">if</span> resp.StatusCode != <span class="hljs-number">200</span> {
    <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, fmt.Errorf(<span class="hljs-string">"unexpected status code: %d, reason: %s"</span>, resp.StatusCode, resp.Status)
}
</code></pre>
<p><strong>Read Response Body:</strong> We read the full response body into memory.</p>
<pre><code class="lang-go"><span class="hljs-comment">// Read response body</span>
body, err := io.ReadAll(resp.Body)
<span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
    <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, err
}
</code></pre>
<p><strong>Parse Response JSON:</strong> We parse the response JSON into a <code>ChatCompletionResponse</code> struct.</p>
<pre><code class="lang-go"><span class="hljs-comment">// Parse response JSON</span>
chatCompletionResp := &amp;ChatCompletionResponse{}
err = json.Unmarshal(body, chatCompletionResp)
<span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
    <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, err
}
</code></pre>
<p><strong>Extract Translated Text:</strong> We extract the translated text from the first choice in the response and return a pointer to it.</p>
<pre><code class="lang-go"><span class="hljs-comment">// Extract the translated text</span>
<span class="hljs-keyword">var</span> content <span class="hljs-keyword">string</span>
<span class="hljs-keyword">if</span> chatCompletionResp.Choices != <span class="hljs-literal">nil</span> &amp;&amp; <span class="hljs-built_in">len</span>(chatCompletionResp.Choices) &gt; <span class="hljs-number">0</span> {
    content = chatCompletionResp.Choices[<span class="hljs-number">0</span>].Message.Content
} <span class="hljs-keyword">else</span> {
    <span class="hljs-keyword">return</span> <span class="hljs-literal">nil</span>, fmt.Errorf(<span class="hljs-string">"no choices"</span>)
}

<span class="hljs-keyword">return</span> &amp;content, <span class="hljs-literal">nil</span>
</code></pre>
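<p>To see the response-parsing logic on its own, here is a runnable sketch that feeds a hand-written sample (shaped like an OpenAI-compatible chat completion reply, not a real API response) through the same extraction steps. The struct definitions are simplified mirrors of the ones used in this post:</p>
<pre><code class="lang-go">package main

import (
    "encoding/json"
    "fmt"
)

// Simplified mirrors of the response structs used in this post.
type Message struct {
    Content string `json:"content"`
}

type Choice struct {
    Message Message `json:"message"`
}

type ChatCompletionResponse struct {
    Choices []Choice `json:"choices"`
}

// extractContent returns the first choice's message content,
// mirroring the logic at the end of the function above.
func extractContent(body []byte) (string, error) {
    resp := new(ChatCompletionResponse)
    if err := json.Unmarshal(body, resp); err != nil {
        return "", err
    }
    if len(resp.Choices) == 0 {
        return "", fmt.Errorf("no choices")
    }
    return resp.Choices[0].Message.Content, nil
}

func main() {
    // Hand-written sample shaped like a chat completion reply.
    sample := []byte(`{"choices":[{"message":{"role":"assistant","content":"Bonjour le monde!"}}]}`)
    content, err := extractContent(sample)
    if err != nil {
        fmt.Println("Error:", err)
        return
    }
    fmt.Println(content) // prints "Bonjour le monde!"
}
</code></pre>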
<h2 id="heading-running-the-application">Running the Application</h2>
<p>Before running the application, make sure to set the <code>GROQ_API_KEY</code> environment variable with your Groq API key. You can do this in your terminal:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">export</span> GROQ_API_KEY=your_api_key_here
</code></pre>
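<p>If you want the program to fail fast with a clear message when the key is missing, a small guard like this works. Note that <code>requireEnv</code> is a hypothetical helper for illustration, not part of the tutorial's source:</p>
<pre><code class="lang-go">package main

import (
    "fmt"
    "os"
)

// requireEnv returns the value of the named environment variable,
// or an error if it is unset or empty.
func requireEnv(name string) (string, error) {
    value := os.Getenv(name)
    if value == "" {
        return "", fmt.Errorf("%s environment variable is not set", name)
    }
    return value, nil
}

func main() {
    apiKey, err := requireEnv("GROQ_API_KEY")
    if err != nil {
        fmt.Println("Error:", err)
        os.Exit(1)
    }
    fmt.Println("API key loaded, length:", len(apiKey))
}
</code></pre>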
<p>Now, run your Go application:</p>
<pre><code class="lang-bash">go run main.go groq_client.go
</code></pre>
<p>If everything is set up correctly, you should see the translated text printed in your terminal.</p>
<h3 id="heading-download-source-code-in-github">Download the Source Code from GitHub</h3>
<p><strong>The full source code for this tutorial is available on GitHub. Don't forget to</strong> ⭐ <strong>the repo.</strong></p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/donvito/learngo/tree/master/ai/go-groq">https://github.com/donvito/learngo/tree/master/ai/go-groq</a></div>
<p> </p>
<p>In this blog post, we walked through translating text in Go with the Groq Fast Inference API. We covered how to set up your environment, make API requests, and handle the responses. With these building blocks, you can integrate the Groq Fast Inference API into your own Go applications for a variety of language-processing tasks. Happy coding!</p>
<hr />
<p>You might also want to check out my other blog post about Groq function calling:</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://blog.donvitocodes.com/integrating-generative-ai-with-real-time-data-from-apis-groq-python-and-go">https://blog.donvitocodes.com/integrating-generative-ai-with-real-time-data-from-apis-groq-python-and-go</a></div>
<p> </p>
<hr />
<p>If you're interested in learning more about integrating Go with Generative AI, follow this blog for more tutorials and insights. This is just the start!</p>
<p>I live-code on <a target="_blank" href="https://twitch.tv/donvitocodes"><strong>Twitch</strong></a> and <a target="_blank" href="https://youtube.com/donvitocodes"><strong>YouTube</strong></a>; follow me there if you'd like to ask questions when I go live. I also post on <a target="_blank" href="https://www.linkedin.com/comm/mynetwork/discovery-see-all?usecase=PEOPLE_FOLLOWS&amp;followMember=melvinvivas"><strong>LinkedIn</strong></a>, where you can connect with me as well.</p>
]]></content:encoded></item></channel></rss>