
                    JOURNAL FOR FRIDAY 28TH FEBRUARY, 2025
______________________________________________________________________________

SUBJECT: ai4all — Demystifying AI For Humans
   DATE: Sat  1 Mar 04:02:55 GMT 2025

This is a written version of the webinar[1] “ai4all — Demystifying AI For
Humans”. I presented the webinar in December 2024 as a guest speaker of The
Institution of Analysts and Programmers[2], of which I am a member.


                     ai4all — Demystifying AI For Humans
                     ‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
So, ai4all — a grandiose title. The computer industry is more hyped up on AI
today than a toddler on a sugar rush. It is predicted that by 2030 investment
in AI will approach $2 trillion (Faist Group, Italy). The future of AI
remains open and exciting. As smarter systems are developed, it’s important to
focus on creating an ethical and beneficial world powered by AI. Nobody should
be left behind.

Through this webinar I want to show that everybody, from new developers to
senior managers, can EASILY have a safe, private, powerful AI at their
fingertips on their own computer at no cost. Be it for assisting them with
work, for learning, for experimentation or simply satisfying their own
curiosity.

To start with, a few basic concepts:

  • Artificial Intelligence (AI) refers to the development of computer systems
    or machines that can perform tasks commonly associated with intelligent
    beings.

  • Machine Learning (ML) is a subset of AI that involves training machines to
    learn from data without being explicitly programmed where to look for
    features.

  • Large Language Models (LLMs) are models trained on large amounts of data
    that learn to process and understand language like humans.

  • Generative AI (GENAI) is a subset of AI that focuses on creating new
    content, such as images, videos, and text, based on existing data.

When interacting with an AI we write prompts. Here is a simple prompt asking
an AI to produce 3 haiku — a traditional Japanese poem with three lines:


  write 3 haiku about AI


The AI responds with 3 haiku:


    Code sings, learns, grows.
    Machine mind, vast and cold.
    Can it feel human?
                                                       Mirror to the soul,
                                          Yet built by human hand’s grace.
                                                      What will it create?
    Will it understand?
    The heart that beats, the human soul,
    Echoes in code.


Getting started in AI is hard for a number of reasons:

  • Complicated because every AI vendor has their own developer’s toolkit and
    their own set of APIs you need to learn.

  • Expensive to use due to vendors offering AI as a cloud service or through
    subscriptions. Usually charged based on the number of input & output
    tokens used.

  • Prohibitive to train your own AI to create an LLM unless you already have
    a server farm and large amounts of training data.

  • Opaque because you don’t know what training data was used, what safeguards
    were in place or if the data is biased or censored.

I’ve been working with Google’s Gemini AI a lot and I wanted to have my own AI
for running examples and experimentation. From this itch to have my own AI, I
drew up a wish list of features I wanted…

  • An AI I could run locally on modest (old) hardware.
  • Is free to use and not charged by the token.
  • Is easy to interact with and to integrate with.
  • Works with multiple, high-quality models.
  • Is private and shares no data.
  • Not a toy, can be used for real work, learning and experimentation.
  • Easy to set up and running in less than 15 minutes by anybody.
  • That was all…

My wish was granted in the form of Ollama. Ollama is an open source project
that makes it very easy to have your own powerful AI. It makes interacting
with an AI simple and allows you to easily switch between multiple models.

Installation is dead easy for Mac, Windows and Linux:


  1. Go to the Ollama website: https://ollama.com
  2. Click on the “Download” button in the middle of the home page.
  3. Run the file that was just downloaded.


You can also do a simple manual installation, useful on Linux, in virtual
machines or in containers:


  > mkdir ~/ollama
  > cd ~/ollama
  > curl -L https://ollama.com/download/ollama-linux-amd64.tgz -o ollama.tgz
  > tar -zxvf ./ollama.tgz
  > cd ~/ollama/bin
  > ./ollama serve


If you have very specific security, compliance or auditing requirements you
can download all of the Ollama source code, examine and audit it, put it under
version control, integrate it with CI/CD pipelines and compile everything
yourself.

To interact with Ollama you first need to pull down a model. Ollama maintains
a library of models that can be used; to browse it, select the “Models” link
on the Ollama home page. Some models of note:


       Google: gemma2
    Microsoft: phi
          IBM: granite
         Meta: llama
    MistralAI: mistral
      Alibaba: qwen
     DeepSeek: deepseek-r1


We will be using a model called llava which is multi-modal (it can handle
images). Specifically we will be using llava:7b, the 7 billion parameter
model. In a terminal we’ll pull a local copy of the model:


    >ollama pull llava:7b
    pulling manifest
    pulling 170370233dd5... 100% ▕█████████████████████████▏ 4.1 GB
    pulling 72d6f08a42f6... 100% ▕█████████████████████████▏ 624 MB
    pulling 43070e2d4e53... 100% ▕█████████████████████████▏  11 KB
    pulling c43332387573... 100% ▕█████████████████████████▏   67 B
    pulling ed11eda7790d... 100% ▕█████████████████████████▏   30 B
    pulling 7c658f9561e5... 100% ▕█████████████████████████▏  564 B
    verifying sha256 digest
    writing manifest
    success
    >


If you pull a model you already have, it will check for updates to the model.

Now we can have a conversation with our new AI:


    >ollama run llava:7b
    >>> hi!
    Hi there! What can I do for you today?
    >>> briefly, what is generative AI?
    Generative AI refers to a class of artificial intelligence algorithms that
    use machine learning techniques to create new and original content. It
    works by learning from existing data sets and generating new data that is
    similar in style and characteristics to the original data. Examples of
    generative AI include Generative Adversarial Networks (GANs), which can be
    used to generate realistic images.
    >>>


When left unused for a period of time Ollama will unload models, freeing their
resources, and sit idle in the background.

What Can You Do With Ollama? You can…

    • Interactively chat and have a conversation.
    • Generate text (emails, messages, project outlines, etc).
    • Perform image analysis.
    • Categorise and summarise texts.
    • Determine intent and sentiment.
    • Translate text.
    • Query closed domains (private data).
    • And much more…


Prompt Engineering
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
We have seen that you interact with an AI by writing prompts, such as “write
3 haiku about AI”. How you phrase your prompt is called Prompt Engineering.

Prompt engineering is the art of instructing an AI and getting it to produce a
desired result. Prompt engineering is a large topic. For a good reference on
getting started with prompt engineering see: https://www.promptingguide.ai

Just as using a smartphone, writing an email or using a web browser have
become commonplace, interacting with AI will also become commonplace.

For casual use you can chat with an AI, pose questions and get responses. To
get the most out of an AI you will need to know how to prompt it. If you are
integrating with an AI, most of your time will be spent trying to write the
perfect prompt.


Interacting With Ollama
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
There are several ways to interact with Ollama. This section will cover some
of them.

Let’s start Ollama in a familiar chat mode and say hi:


    >ollama run llava:7b
    >>>hi
    Hi!

    How can I help you today?
    >>>


Next we will ask a question:


    >>>briefly, what are the primary colours for pigments?
    The three primary colors for pigments are:

    * Red
    * Yellow
    * Blue

    These are considered primary because, when mixed in different proportions,
    they can create a wide range of other colors. This is the same concept as
    primary colors in light (RGB), though pigment mixing relies on different
    chemical interactions.
    >>>


As we are in chat mode we can continue the conversation from the previous
context with a follow up question:

    >>>and for light?
    For light, the three primary colors are:

    * Red
    * Green
    * Blue

    These are often referred to as the RGB (red, green, blue) colors and form
    the basis of digital displays like your computer screen.

    Here’s a little more about how they work:

    * Additive Color Mixing: In light, these colors are added together. A full
      spectrum of red, green, and blue light combined creates white light.
    * Subtractive Color Mixing: In pigments, the colors are mixed by absorbing
      certain wavelengths of light.  That’s why black pigment absorbs all
      wavelengths of visible light.
    >>>

We can also have multi-line prompts using """:

    >>> """
    ... briefly, what are the primary colours
    ... for pigments?
    ... for light?
    ... """
    Here’s a brief breakdown:

    Pigments:
    * Red, Yellow, and Blue are considered primary colors for pigments.

    Light:
    * Red, Green, and Blue are considered primary colors for light (Additive
      Color Mixing).
    >>>


This is useful if you are copy‘n’pasting blocks of text, such as emails.

Now let’s exit chat mode and return to the command line. Here we can use
Ollama and pass a prompt directly on the command line:


    >ollama run llava:7b "briefly, what are the primary colours for pigments?"
    The three primary colors for pigments are:

    * Red
    * Yellow
    * Blue

    These are considered primary because, when mixed in different proportions,
    they can create a wide range of other colors. This is the same concept as
    primary colors in light (RGB), though pigment mixing relies on different
    chemical interactions.
    >


However we cannot continue a conversation from the command line:


    >ollama run llava:7b "and for light?"
    Please provide me with some context!  "For light" is a very broad
    statement.
    >


This is because the context ends when we are returned to the command line.

Another thing we can do on the command line is pass in a prompt from a file.
For example, here are our colour questions in a plain text file:


    >cat ./color.prompt
    briefly what are the primary colours
    for pigments?
    for light?
    >


We can then pass that text file to Ollama for processing:


    >ollama run llava:7b < ./color.prompt
    Here’s a brief breakdown:

    Pigments:
    * Red, Yellow, and Blue are considered primary colors for pigments.

    Light:
    * Red, Green, and Blue are considered primary colors for light (Additive
      Color Mixing).
    >


Having the prompt in a plain text file has some advantages. It’s testable,
reusable, repeatable, distributable and can be version controlled and
backed up.
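
Because the prompt lives in a plain file, it is also easy to script. Here is
a minimal Python sketch (not part of the webinar) assuming the ollama binary
is on your PATH; it feeds the same prompt file to a few models — the model
list is just an example — so their answers can be compared:

```python
import subprocess
from pathlib import Path

def build_command(model: str) -> list:
    """Argument vector for a non-interactive `ollama run <model>`."""
    return ["ollama", "run", model]

def run_prompt_file(model: str, prompt_path: str) -> str:
    """Equivalent of `ollama run <model> < prompt_path` from the shell."""
    prompt = Path(prompt_path).read_text()
    result = subprocess.run(build_command(model), input=prompt,
                            capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    # Compare how different models answer the same prompt file.
    for model in ("llava:7b", "gemma2", "mistral"):
        print(f"=== {model} ===")
        print(run_prompt_file(model, "./color.prompt"))
```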

We can ask the AI to produce its response as JSON. Here is a modified prompt:


    > cat ./color.prompt
    briefly, as JSON, what are the primary colours
    for pigments?
    for light?
    >


If we now run that prompt with an added “--format json” flag:


    >ollama run --format json llava:7b < ./color.prompt
    {
      "primary colors for pigments": [
        "red",
        "blue",
        "yellow"
      ],
      "primary colors for light": [
        "red",
        "green",
        "blue"
      ]
    }
    >


For the final example we will send our request to the AI using an HTTP POST
request with a JSON payload:


    >curl -s http://localhost:11434/api/generate --json '
    {
            "model": "llava:7b",
            "stream": false,
            "format": "json",
            "prompt": "as JSON, what are the primary colours for pigments? for light?"
    }' | jq -r ".response" | jq "."

    {
      "primary colors for pigments": [
        "red",
        "blue",
        "yellow"
      ],
      "primary colors for light": [
        "red",
        "green",
        "blue"
      ]
    }
    >


Note that ‘jq’ is used to extract the response and pretty print the JSON. The
actual full response returned by the AI looks like this:


    {"model":"llava:7b","created_at":"2025-02-28T20:37:12.691371581Z","respon
    se":"{\n\n\"primary colors for pigments\": [\"red\", \"blue\", \"yellow\"
    ],\n\n\"primary colors for light\": [\"red\", \"green\", \"blue\"]\n\n} "
    ,"done":true,"done_reason":"stop","context":[733,16289,28793,390,9292,287
    25,767,460,272,6258,22137,354,18958,1339,28804,354,2061,28804,733,28748,1
    6289,4490,13,13,28739,12525,9304,354,18958,1339,1264,7367,893,548,345,123
    49,548,345,28724,4149,8883,13,13,28739,12525,9304,354,2061,1264,7367,893,
    548,345,13234,548,345,12349,2242,13,13,28752,28705],"total_duration":3428
    110740,"load_duration":1430812937,"prompt_eval_count":23,"prompt_eval_dur
    ation":320000000,"eval_count":42,"eval_duration":1674000000}


This is where things get interesting. If we can interact with the AI using a
REST API to send JSON, and get results back as JSON, we can start to put
together our own tools and services using the capabilities of the AI ;)
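
As a first step in that direction, here is a rough Python equivalent of the
curl call above, using only the standard library and assuming Ollama is
listening on its default port 11434:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def extract_response(raw: str) -> str:
    """Pull the 'response' field out of the full JSON reply."""
    return json.loads(raw)["response"]

def generate(prompt: str, model: str = "llava:7b") -> str:
    """POST a prompt to the Ollama REST API and return the response text."""
    payload = json.dumps({
        "model": model,
        "stream": False,   # one complete JSON object, not a stream of chunks
        "format": "json",  # ask the model itself to answer in JSON
        "prompt": prompt,
    }).encode()
    request = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as reply:
        return extract_response(reply.read().decode())

if __name__ == "__main__":
    print(generate("as JSON, what are the primary colours for pigments? "
                   "for light?"))
```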


Prompting Examples
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
Now it’s time to see some of Ollama’s capabilities through some example
prompts. The prompts are not Ollama specific and should work for most AIs.

Drafting Emails – Annual Leave
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
Starting off with a simple example. Here we are prompting the AI to write an
email asking for some annual leave. The AI responds with an email template we
can cut‘n’paste into our email client.


  >>> """
  ... write a semi-formal email to the HR team for 5 days annual leave next
  ... week from 19th - 23rd August to relax and rest.
  ... """


  Subject: Leave of Absence Request — {Your Name}

  Hi HR Team,

  This email is to request five days of leave from August 19th to August 23rd,
  for personal time off. I’d like to use this period to relax and recharge.

  Please let me know if you require any further information.

  Thank you,
  {Your Name}


Drafting Emails – Sales
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
Here we are prompting the AI with information about a new product and some
contact details. We want the AI to write a sales email to a customer
announcing the new product. The AI responds with a nice sales email ready to
be cut-n-pasted. It has all the product details and features, contact
information and links for the website and email address.


  >>> """
  ... ACME Retail is about to begin selling its newest software product, ACME
  ... CRM. ACME CRM is a cloud-based customer relationship management (CRM)
  ... system that helps retailers improve their sales, marketing, and customer
  ... service.
  ...
  ... With ACME CRM, you can:
  ... - Track customer interactions and history.
  ... - Manage your sales pipeline.
  ... - Create and track marketing campaigns.
  ... - Provide better customer service.
  ...
  ... Bob Smith’s details: ACME Sales Team, sales@acme.com, (555) 987-6543
  ... ACME Retail website: www.acme-retail.com
  ...
  ... write an email to a customer Clare from Bob announcing the new product.
  ... """


  Subject: Introducing ACME CRM — Improve Your Sales, Marketing, and Customer
           Service Today!

  Dear Clare,

  I hope this email finds you well. I’m excited to announce that ACME Retail
  has recently launched our latest software product, ACME CRM. This
  cloud-based customer relationship management (CRM) system is specifically
  designed to help retailers like yours enhance their sales, marketing, and
  customer service efforts.

  With ACME CRM, you can track customer interactions and history, manage your
  sales pipeline, create and track marketing campaigns, and provide better
  customer service. Our user-friendly interface ensures that everyone on your
  team can easily access the information they need to make informed decisions
  and drive success for your business.

  We believe that ACME CRM will be a valuable addition to your retail
  operations, helping you streamline processes, increase efficiency, and
  ultimately boost sales and customer satisfaction. To learn more about ACME
  CRM or to schedule a demo, please don’t hesitate to contact me at
  sales@acme.com or (555) 987-6543.

  In the meantime, you can visit our website at www.acme-retail.com for more
  information about our products and services.

  Thank you for your interest in ACME CRM, and we look forward to hearing from
  you soon.

  Best regards,
  Bob Smith
  ACME Sales Team   sales@acme.com   (555) 987-6543   www.acme-retail.com


Drafting A Blog/Social Media Post
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
What if we take the same product information and contact details from the
previous example, but this time prompt the AI to write a product announcement
blog post for us? In this prompt only the last line has been modified from the
previous example…


  >>> """
  ... ACME Retail is about to begin selling its newest software product, ACME
  ... CRM. ACME CRM is a cloud-based customer relationship management (CRM)
  ... system that helps retailers improve their sales, marketing, and customer
  ... service.
  ...
  ... With ACME CRM, you can:
  ... - Track customer interactions and history.
  ... - Manage your sales pipeline.
  ... - Create and track marketing campaigns.
  ... - Provide better customer service.
  ...
  ... Bob Smith’s details: ACME Sales Team, sales@acme.com, (555) 987-6543
  ... ACME Retail website: www.acme-retail.com
  ...
  ... write a blog post announcing the new product.
  ... """


  Introducing ACME CRM: The Perfect Tool for Retailers to Boost Sales,
                        Marketing, and Customer Service!

  We are excited to announce the launch of our newest software product, ACME
  CRM! This cloud-based customer relationship management (CRM) system is
  designed to help retailers take their sales, marketing, and customer service
  efforts to the next level.

  With ACME CRM, you can easily track customer interactions and history,
  allowing you to tailor your approach to each individual customer. You’ll be
  able to manage your sales pipeline with ease, ensuring that no leads fall
  through the cracks. You can also create and track marketing campaigns,
  giving you valuable insights into their effectiveness and helping you make
  data-driven decisions.

  But that’s not all! With ACME CRM, you can provide better customer service
  by having all of your customer information at your fingertips. This will
  allow you to respond quickly and efficiently to customer inquiries,
  resolving any issues that may arise and ensuring that your customers are
  happy and satisfied.

  At ACME Retail, we understand the importance of staying ahead of the
  competition. That’s why we’ve developed ACME CRM to help retailers
  streamline their processes and improve their overall performance. With this
  powerful tool at your disposal, you’ll be able to focus on what really
  matters: growing your business and providing exceptional experiences for
  your customers.

  If you’re interested in learning more about ACME CRM or how it can benefit
  your retail business, please don’t hesitate to contact our sales team at
  sales@acme.com or visit our website at www.acme-retail.com. We look forward
  to helping you take your customer relationships to the next level!


Planning a Presentation
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
Here we are prompting the AI to create an outline for a presentation covering
some specific topics. The AI responds with a complete presentation outline,
with suggestions for content to include.


  >>> """
  ... create a plan for a presentation on using Ollama to provide AI
  ... capabilities with a quick overview of AI, ML, LLMs and Generative AI.
  ... Plus examples of what Ollama can do.
  ... """


    I. Introduction
       • Briefly introduce the topic of AI and its importance in modern society
       • Introduce Ollama as a tool that allows for easy integration of AI
         capabilities into applications
   II. Overview of AI, ML, LLMs and Generative AI
       • Provide a brief overview of what each term refers to
       • Explain how these technologies are related and how they can be used
         together
  III. Examples of what Ollama can do
       • Provide examples of how Ollama has been used in various industries,
         e.g. finance, healthcare & marketing
       • Highlight specific features & capabilities of Ollama, e.g. natural
         language processing & image recognition
   IV. How to use Ollama
       • Explain the process for integrating Ollama into an application
       • Discuss the different options for customizing and training Ollama’s
         AI models
   V. Conclusion
       • Summarize the key points of the presentation
       • Encourage audience members to consider using Ollama in their own
         projects
   VI. Q&A
       • Allow time for audience to ask questions about the content covered
         in the presentation.


Summarising Text
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
We are all time poor and would love to read all of the interesting articles
that come our way. What if we saved time by asking the AI to summarise those
articles for us into a few bullet points?

Here is an example from an article on quantum computers. We ask the AI to
summarise the article into four bullet points.


  >>> """
  ... Provide a very short summary in four bullet points for the following
  ... article:
  ...
  ... Our quantum computers work by manipulating qubits in an orchestrated
  ... fashion that we call quantum algorithms. The challenge is that qubits
  ... are so sensitive that even stray light can cause calculation errors —
  ... and the problem worsens as quantum computers grow. This has significant
  ... consequences, since the best quantum algorithms that we know for running
  ... useful applications require the error rates of our qubits to be far
  ... lower than we have today. To bridge this gap, we will need quantum error
  ... correction. Quantum error correction protects information by encoding it
  ... across multiple physical qubits to form a ‘logical qubit,’ and is
  ... believed to be the only way to produce a large-scale quantum computer
  ... with error rates low enough for useful calculations. Instead of
  ... computing on the individual qubits themselves, we will then compute on
  ... logical qubits. By encoding larger numbers of physical qubits on our
  ... quantum processor into one logical qubit, we hope to reduce the error
  ... rates to enable useful quantum algorithms.
  ...
  ... Bullet points:
  ... """


  Here is a four-bullet point summary of the article:

  • Quantum computers rely on manipulating qubits in precise ways, but these
    qubits are highly sensitive. This sensitivity leads to errors that worsen
    as the computer grows larger.

  • To overcome this challenge, we need quantum error correction (QEC). QEC
    protects information by encoding it across multiple physical qubits into a
    logical qubit.

  • QEC allows for working with "logical qubits" rather than individual
    physical qubits. This improves efficiency by reducing errors and enables
    the execution of useful quantum algorithms.

  • By using QEC, we aim to achieve error rates low enough for practical
    quantum computing applications. This opens doors to solving complex
    problems that are currently beyond traditional computers’ abilities.


Conversation Summarising & Todo Lists
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
Here we have a more complex example. A chat conversation between a customer
and a support agent. We prompt the AI with the chat history and ask it to
provide a summary of the conversation and to generate a list of tasks for the
support agent based on what is discussed. The AI responds with a summary and a
task todo list. As we saw earlier, this could be produced as a JSON response.
Combined with REST APIs or Agentic AI it would be possible to automatically
update a CRM or raise the todos as actionable items.


  >>> """
  ... Generate a summary of the following conversation and at the end
  ... summarize the todo’s for the support Agent to complete post
  ... conversation:
  ...
  ...      Customer: Hi, I’m Larry, and I received the wrong item.
  ... Support Agent: Hi, Larry. How would you like to see this resolved?
  ...      Customer: That’s alright. I want to return the item and get a
  ...                refund, please.
  ... Support Agent: Of course. I can process the refund for you now. Can I
  ...                have your order number, please?
  ...      Customer: It’s AZ1289.
  ... Support Agent: Thank you. I’ve processed the refund, and you will
  ...                receive your money back within 14 days.
  ...      Customer: Thank you very much.
  ... Support Agent: You’re welcome, Larry. Have a good day!
  ...
  ... Summary:
  ... """


  SUMMARY
  Larry contacted support to report receiving an incorrect item. He requested
  a refund for the returned item. The agent processed the refund immediately
  and confirmed that the customer would receive his money back within 14 days.
  The conversation concluded with Larry expressing gratitude and the agent
  wishing him well.

  TODO’S FOR SUPPORT AGENT
  • Process refund: Complete the refund request based on order number (AZ1289).
  • Confirm refund processing: Inform Larry that the refund has been processed
    and he will receive it in 14 days.
  • Update customer information: Update the CRM system to reflect Larry’s case
    details.
  • Check for additional issues: If Larry indicated any specific issue beyond
    the incorrect item, address them if necessary.
  • Follow-up (optional): If desired, follow up with Larry within 3 business
    days to ensure he received his refund and is satisfied. This can be done
    via email or a phone call.


Image Analysis
‾‾‾‾‾‾‾‾‾‾‾‾‾‾
Here we have an image of a backpack on a trail. We prompt the AI to generate a
title, caption and description for the image. Something that could be useful
if you need to populate a large product catalogue.


  >>> """
  ... Given image: ./backpack.png
  ...
  ... Suggest a title, caption and briefly, in a few sentences, describe what
  ... is happening in the given image.
  ... """


        Title: Alpine Wonders: A Hiker’s Paradise
      Caption: Exploring the Great Outdoors: A Hiker’s Journey.
  Description: The image captures a serene moment on an alpine trail. The
               scene is teeming with life, from the colourful wildflowers
               carpeting the hillside to the rugged mountains that tower in
               the background. A hiker, prepared for the adventure ahead, has
               their backpack resting nearby, perhaps taking a moment to enjoy
               the breathtaking view or admire the vibrant flora at their
               feet. The image evokes a sense of tranquillity and the allure
               of exploring nature’s wonders.


Text Translation
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
Next we have three messages written in different languages. We prompt the AI
to list the original message, the language the message is written in and a
translation.


  >>> """
  ... For each message below provide the original message, the language used
  ... and an English translation:
  ...
  ... Message: Witaj świecie
  ... Message: હેલો વર્લ્ડ
  ... Message: 你好世界
  ... """


  Message: Witaj świecie (Polish language)
  English translation: Hello world

  Message: હેલો વર્લ્ડ (Gujarati language)
  English translation: Hello world

  Message: 你好世界 (Simplified Chinese language)
  English translation: Hello world


Sentiment Determination (Multi-Shot Prompt)
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
Here we have a different type of prompt. We want the AI to determine the
sentiment of film reviews. This prompt has 3 examples of film reviews showing
the AI how we expect it to respond. At the end of the prompt we pose our own
review, “It was too long, too dark and the acting was bad”, with an empty
output for the AI to fill in.


  >>> """
  ... You are an AI built to extract the sentiment of sentences.
  ...
  ...  Input: A well-made and entertaining film
  ... Output: Positive
  ...
  ...  Input: I fell asleep after 10 minutes
  ... Output: Negative
  ...
  ...  Input: The movie was ok
  ... Output: Neutral
  ...
  ...  Input: It was too long, too dark and the acting was bad.
  ... Output:
  ... """


  Negative
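
Multi-shot prompts like this lend themselves to being assembled in code,
which is handy when classifying reviews in bulk. A small sketch — the
Input/Output layout mirrors the prompt above, while the function name is my
own:

```python
# The labelled examples from the multi-shot prompt above.
EXAMPLES = [
    ("A well-made and entertaining film", "Positive"),
    ("I fell asleep after 10 minutes", "Negative"),
    ("The movie was ok", "Neutral"),
]

def build_sentiment_prompt(review: str) -> str:
    """Assemble a multi-shot prompt, leaving the final Output blank."""
    lines = ["You are an AI built to extract the sentiment of sentences.", ""]
    for text, label in EXAMPLES:
        lines += [f" Input: {text}", f"Output: {label}", ""]
    lines += [f" Input: {review}", "Output:"]
    return "\n".join(lines)

if __name__ == "__main__":
    # Pipe this into `ollama run llava:7b`, or POST it to the REST API.
    print(build_sentiment_prompt(
        "It was too long, too dark and the acting was bad."))
```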


Classifying Intent
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
In this example we have requests from three users. For each request we prompt
the AI to determine the user’s intent. This can be useful for directing a user
to an appropriate service or area of a website, for example.


  >>> """
  ... Given a user’s input, classify their intent, such as "finding
  ... information", "making a reservation", or "placing an order".
  ...
  ... User: Hi, can you please book a table for two at Juan for May 1?
  ... User: Hi, I’d like a burger and fries.
  ... User: Hi, do you sell insurance?
  ... """


  Here’s a breakdown of the user intents based on their provided inputs:

    User: Hi, can you please book a table for two at Juan for May 1?
  Intent: Making a reservation (specific request for booking)

    User: Hi, I’d like a burger and fries.
  Intent: Placing an order (explicitly requesting food)

    User: Hi, do you sell insurance?
  Intent: Seeking information / product knowledge
          (asking about a product or service, not a specific action yet)


Product Feedback Analysis
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
This prompt is the most complex example I have. It contains details about a
product, a teddy bear, and a list of customer reviews and comments. We prompt
the AI to perform 4 tasks: create a product description, extract the product
specifications, summarise what the customers think of the product and
determine the overall customer sentiment towards the product.

What is of particular interest is the summary of how customers feel: the AI
has picked up on the fact that some customers would prefer a smaller or
larger version of the product. Sounds to me like a good sales opportunity!


  >>> """
  ... You are an AI built to extract the sentiment of product reviews.
  ...             The product is: Soft Cuddly Teddy Bear
  ...       The product price is: £31.10
  ... The product description is:
  ...
  ... Here is a teddy bear from the Soft Cuddly Friends family! The cuddly
  ... friend made of cuddly soft plush invites you to cuddle and play. The
  ... 30cm teddy bear is always radiantly available if you want to cuddle
  ... happily or comforting. The wavy, wonderfully soft plush is tan. The cute
  ... black nose and smiling mouth are embroidered black. Not only in the
  ... arms, it shows itself as a good friend. It can also sit perfectly as a
  ... silent observer and eye-catcher anywhere. For better grip, a granulate
  ... bag is incorporated into its buttocks, as well as in the paws and soles.
  ... It is also ideal as a companion when travelling. The teddy bear is a
  ... loyal friend for life. A teddy bear simply belongs in every child’s room
  ... and thus into the lives of children, the whole family.
  ...
  ... Product reviews:
  ... Lovely and soft, very well made — as you’d expect from Steiff!
  ... Have bought a few different Steiff soft toys and they’re all so lovely.
  ... Well made and perfect for young children.
  ... They also wash really well which is handy for when they need it, without
  ...   worrying about ruining them.
  ... This is for my new granddaughter, it is so cute and cuddly!
  ... This bear is suitable for any child from birth. He’s gorgeous!
  ... A nice size for a new baby. Traditional pointed snout. Lovely squishy
  ...   body. I hope teddy bear will be well loved.
  ... I love teddies for the new babes in the family. Lovely little keepsake.
  ... Soft bear. Not too big. Perfect first bear.
  ... This is a classic teddy bear with a lovely soft, cuddly fur — a keeper.
  ...
  ... Perform the following tasks:
  ... 1. In one short paragraph describe the product.
  ... 2. Describe the specifications of the product.
  ... 3. Based on the reviews, briefly summarise the overall impression of the
  ...    product, likes and dislikes.
  ... 4. What is the overall sentiment of the comments?
  """


  Product Analysis of Soft Cuddly Teddy Bear

  1. Description: This teddy bear (30cm) is crafted from soft plush fabric in
  a tan colour. It features embroidered details like a black nose and smiling
  mouth, and comes with added grip for comfortable holding. It’s perfect as a
  cuddly companion for children of all ages, making it ideal for play or
  relaxation at home or on the go.

  2. Specifications:
          Size: 30cm (approx.)
      Material: Plush fabric
        Colour: Tan
      Features: Embroidered details (nose & mouth), granulate bag in buttocks,
                paws, and soles for grip

  3. Overall Impression from Reviews: The teddy bear is highly praised for its
  quality, softness, and cuteness. Many reviewers highlight its durability
  and suitability for both children and adults. They appreciate the classic
  Teddy Bear design and the added features like easy-to-wash fabric. Concerns
  mainly revolve around size, with some preferring a smaller or bigger
  version.

  4. Sentiment of Comments: The overall sentiment is overwhelmingly positive,
  showcasing strong appreciation for this teddy bear’s quality & potential as
  a cherished companion.


Bridging the Gap
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
We just saw an example of a complex prompt. What about a prompt that produces
complex output, where the AI can really show off? Maybe get it to do a little
research for us? I’ve been musing over the implications of AI coming together
with quantum computing – both are grounded in probabilities and seem to be a
natural fit for each other. First we have the prompt used to produce a
research paper on this topic, stored in a plain text file. It is followed by
the processing steps I performed to turn the generated output into a PDF:

  1) run Ollama with the prompt as input and output a markdown file
  2) feed the markdown into a processor and generate the HTML file
  3) some very minimal light editing
  4) open the HTML in a browser
  5) save as a PDF

About 10 minutes’ work, including waiting on the AI. But what does the final
PDF look like?


  > cat BridgingTheGap.prompt
  You are a research assistant skilled in writing research papers. You are
  very careful not to repeat phrases and sentences in your response.
  Generate a very detailed research paper on the feasibility and benefits of
  combining LLMs with quantum computing to take advantage of the fact that
  LLMs and quantum computing both deal with probabilities.
  Include a review of the latest advances in AI, ML, LLMs and Generative AI.
  Include the latest advances in quantum computing.
  Explore the potential of the two technologies intersecting.
  Include an analysis of current technology limitations and potential
  solutions.
  Include numbered cross-references to citations.
  Include full citation references to all cited materials.


  PROCESSING STEPS:
    • ollama run gemma2:2b < BridgingTheGap.prompt > BridgingTheGap.md
    • lowdown -s --html-no-escapehtml --html-no-skiphtml \
          BridgingTheGap.md > BridgingTheGap.html
    • some very light editing of the HTML (title, add text justification)
    • open the HTML in a browser and use the browser to save as PDF

  RESULT: ./BridgingTheGap.pdf
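
The scripted parts of those steps can also be collapsed into a single
pipeline. This is only a sketch: it assumes ollama and lowdown are on the
PATH and the prompt file exists, and it is guarded so it does nothing if
either tool is missing. The hand edits, opening the browser and saving the
PDF remain manual.

```shell
#!/bin/bash
# Sketch: steps 1 and 2 as one pipeline. tee keeps the intermediate
# markdown file while the HTML is generated in the same pass. Guarded
# so it is a no-op when the required tools are not installed.
if command -v ollama >/dev/null && command -v lowdown >/dev/null; then
  ollama run gemma2:2b < BridgingTheGap.prompt |
    tee BridgingTheGap.md |
    lowdown -s --html-no-escapehtml --html-no-skiphtml \
      > BridgingTheGap.html
fi
```

Keeping the markdown via tee means the generated text can be re-edited and
re-converted later without another (comparatively slow) run of the AI.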


Using The REST API
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
As well as prompting an AI through a chat-like interface, we can interact
with the AI programmatically using its REST API. This example is a BASH
script. It makes an HTTP POST request to prompt the AI with our translation
example from earlier. The response is then formatted using some standard
tools before being displayed. Using the REST API, any application or
programming language that can make an HTTP POST request can integrate with,
and make full use of, the AI’s capabilities.


        #!/bin/bash -x
        prompt=$(
        cat <<-EOT
          For each Message given below provide the original message,
          the language used and an English translation:

          Message: Witaj świecie
          Message: હેલો વર્લ્ડ
          Message: 你好世界
        EOT
        )
        prompt=${prompt//$'\n'/}

        json=$(
        cat <<-EOT
          {
            "model": "gemma2:2b",
            "stream": false,
            "prompt": "${prompt}"
          }
        EOT
        )

        data=$(curl http://localhost:11434/api/generate -s --json "$json")
        echo "$data" | jq -r '.response' | lowdown -tterm --term-no-ansi


  Message 1: Witaj świecie
    • Language: Polish
    • English Translation: Hello world

  Message 2: હેલો વર્લ્ડ
    • Language: Gujarati
    • English Translation: Hello World

  Message 3: 你好世界
    • Language: Chinese (Mandarin)
    • English Translation: Hello, World
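
One caveat with the script above: the prompt is spliced into the JSON body by
hand, so a prompt containing double quotes or backslashes would produce
invalid JSON. A safer sketch, with the same model and endpoint but an
illustrative prompt, lets jq do the quoting instead:

```shell
#!/bin/bash
# Build the request body with jq so any quotes, backslashes or newlines
# in the prompt are escaped for us. The prompt text is illustrative.
prompt='Translate "Witaj świecie" and explain the \" escape.'
json=$(jq -cn --arg model gemma2:2b --arg prompt "$prompt" \
        '{model: $model, stream: false, prompt: $prompt}')
printf '%s\n' "$json"
# The request itself is unchanged:
#   curl http://localhost:11434/api/generate -s --json "$json"
```

Because jq escapes embedded newlines as \n, this also removes the need for
the prompt=${prompt//$'\n'/} line that stripped them out by hand.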


You Don’t Need The Latest Or Greatest Machine
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
The machine used for the demos is a tiny 1 litre form factor PC that uses an
older, low power, workstation graphics card and an older generation CPU.


  Lenovo P360 Tiny:
  • 1 Litre PC (7"x7"x1½")
  • NVidia T1000 8GB
  • 64GB DDR5-4800
  • Intel 12th Gen i9-12900T
    - 8 performance cores, 16 threads
    - 8 efficiency cores, 8 threads


Performance figures for various models on this system running Ollama 0.5.12:


                     MODEL  TKN/SEC                     MODEL  TKN/SEC
           deepseek-r1:14b    10.46               llama3.2:3b    46.58
          deepseek-r1:1.5b    75.96       llama3.2-vision:11b    14.29
           deepseek-r1:32b     3.58              llama3.3:70b     1.55
           deepseek-r1:70b     1.54                  llava:7b    27.71
            deepseek-r1:7b    24.14                mistral:7b    26.66
            deepseek-r1:8b    23.17               phi3.5:3.8b    20.94
                falcon3:7b    21.19                  phi4:14b    10.71
                 gemma2:2b    39.70                qwen2.5:3b    43.90
                 gemma2:9b    14.13                qwen2.5:7b    23.92
       granite3.1-dense:2b    40.36                   qwq:32b     3.47
       granite3.1-dense:8b    21.62              smollm2:1.7b    45.26
         granite3.1-moe:1b    74.73              smollm2:360m    61.82
         granite3.1-moe:3b    51.21               smollm:360m    93.90
               llama3.1:8b    22.97            tinyllama:1.1b    94.35
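
One way to gather numbers like these yourself: `ollama run --verbose` prints
timing statistics, including an “eval rate” line, to stderr after each
response. The helper below is only a sketch, with example model names; the
loop is skipped entirely if ollama is not installed:

```shell
#!/bin/bash
# Sketch: extract the "eval rate" figure that `ollama run --verbose`
# prints to stderr after each response. Model names are examples only.
parse_rate() {  # stdin: verbose output; $1: model name
  awk -v m="$1" '/^eval rate:/ { printf "%25s  %s tkn/sec\n", m, $3 }'
}
if command -v ollama >/dev/null; then
  for m in gemma2:2b llama3.2:3b; do
    # stderr (the stats) goes to the pipe; the response is discarded
    ollama run "$m" --verbose 'Briefly say hello.' 2>&1 >/dev/null |
      parse_rate "$m"
  done
fi
```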


The NVidia T1000 8GB in this system is roughly equivalent to an NVidia GTX
1650. Ollama can achieve much better performance using popular, inexpensive
gaming graphics cards — at the moment NVidia and AMD are well supported.

If a suitable GPU is not available then Ollama can still be used with just a
CPU for processing. Ollama can run on systems with very limited resources.
I’ve had Ollama running smaller AI models, generating 6-12 tokens/second, on
an 8GB Raspberry Pi 4. It retails for about £70. Performance was comparable to
a fast reading speed, BUT it was usable and adequate for experimenting!

At the other end of the spectrum Ollama can be used to create large commercial
AI servers, either in the cloud or on-prem, with multiple GPUs, copious
amounts of memory and an array of different AI models.

Final thought, what are you going to do with your own AI? :)

--
Diddymus

  [1] ai4all - Demystifying AI For Humans:
      https://youtu.be/CXngaetYmws?si=TIz5FwT9hfJ7CyBQ

  [2] The Institution of Analysts and Programmers: https://www.iap.org.uk


  Up to Main Index                          Up to Journal for February, 2025