
Any LLM Model API + Python Integrations, Hacks & Tips

Are you using an LLM like ChatGPT or Google Bard and want to know how to count the tokens you use, estimate what those tokens cost, and manage prompts across different LLM models? Then you are in the right place, friends.

In this blog, I will show you how. We typically use LLMs from Python for prompt engineering, task automation, and many similar reasons. I'll assume you already know what prompt engineering and Python are; if not, you can check our related blogs first.

Introduction

Now, are you ready for an adventure? Buckle up, because we're about to connect Python with the OpenAI API (or any other LLM API), the brains behind the incredible GPT-3 and GPT-4 models.

Set Your API Key

So how do you talk to OpenAI? You need an API key, a secret code that proves to OpenAI that you're allowed through their door.

  1. Create an account on the OpenAI platform.
  2. After signing in, navigate to the API section.
  3. There you'll find your unique API key. Keep it secret, keep it safe. We don't want any strangers tinkering with your AI!

Here is a short video demonstrating this:

🔥 Pro tip – don't share your API key, even if someone promises you a spaceship in return. It's more precious than that! 😅
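A safer habit, shown here as a minimal sketch, is to keep the key out of your code entirely and read it from an environment variable once you've installed the openai package (we'll do that in a moment); the name OPENAI_API_KEY is just a common convention:

import os
import openai

# read the key from an environment variable instead of hardcoding it,
# e.g. after running `export OPENAI_API_KEY=sk-...` in your shell
openai.api_key = os.environ["OPENAI_API_KEY"]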

Understanding Costs

Before you start, you should know that OpenAI's API is not free, though you may get some free credits when you sign up for the first time.

After that, every time you ask OpenAI to generate some text, it counts the number of tokens (words, spaces, punctuation) it processes.

And yes, you guessed it: more tokens mean more cost. But don't worry, it's not as scary as it sounds. We'll dive deeper into this in lecture 4, where we'll look at estimating these costs.

It is around $0.003 per 1,000 tokens for the ChatGPT model and ~$0.06 per 1,000 tokens for the GPT-4 model.

💡 Note: 1 token is approximately 4 characters.

🔗 You can find more about pricing on OpenAI's official pricing page.

Connect Python With the OpenAI API

Do you remember the API key from our last lecture? Bring it out; it's time for it to shine! 🗝️✨

Without further ado, let's now see how to generate text with ChatGPT directly from your Python script.

Step 1: Create a new, empty Python script file; let's call it chatgpt.py.

Step 2: Open the terminal and install the OpenAI package by running this command:

pip install openai

Step 3: Copy the following script code, and paste it into your script file.

import openai

# replace with your own API key (and keep it secret!)
openai.api_key = "sk-beCoSmKAoAP60SFpjc4gT3Blb8FadasdvCdsdL"

def generate_text_with_openai(user_prompt):
    # send a single user message to the chat completion endpoint
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # you can replace this with your preferred model
        messages=[{"role": "user", "content": user_prompt}],
    )
    # return only the text of the first generated choice
    return completion.choices[0].message.content

Try it yourself with this prompt:

As a creative YouTube title generator, craft a unique and captivating title for a video about [your topic]. The title should be concise, encourage clicks, and may incorporate wordplay or humor. Ensure that it avoids overused phrases and generic titles.

After creating the title, please discuss how your chosen title effectively conveys the main focus of the video and appeals to potential viewers. Analyze the aspects of the title that make it distinctive and successful in capturing attention among competing content.

NOTE: Put the prompt between triple quotes to use a multi-line string in your script, as shown in the sketch below.
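Putting it together, here is a minimal sketch of calling our function with a triple-quoted, multi-line prompt ([your topic] is a placeholder for you to fill in):

user_prompt = """As a creative YouTube title generator, craft a unique and
captivating title for a video about [your topic]. The title should be concise,
encourage clicks, and may incorporate wordplay or humor."""

print(generate_text_with_openai(user_prompt))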

And voilà! You just set up your first connection with ChatGPT! It's like we just phoned a friend, but this friend happens to be a super-smart AI. 🧠

Remember, a great explorer is not afraid of making mistakes. They're just stepping stones on the path to success. So don't worry if you stumble a little. Keep going!

Counting Tokens in an LLM Model

We're journeying further into AI and Python, and today's destination? Tokens!

In this lecture, we'll learn how to count the number of tokens in a prompt or in any given response. Why do we need to do this?

Remember our chat about costs in the first lecture? The number of tokens directly impacts that. And who said counting is just for accountants? 😅

What is a Token?

Tokens are the units of text that NLP models understand. They can be as short as one character or as long as one word. For example, "ChatGPT is fun!" could be split into 5 tokens:

  1. "ChatGPT"
  2. " " (a space)
  3. "is"
  4. " " (a space)
  5. "fun!"

I think we explained this in Section 2. You can go back there if you want 🙂

Counting tokens can be tricky because it's not as simple as counting words. A token can be a word, but it could also be a single character or a punctuation mark.

Fortunately, you are here. I created a simple function that you can use to count tokens anywhere you want. Here we are:

import tiktoken

def count_tokens(text, selected_model):
    # get the tokenizer (encoding) that matches the selected model
    encoding = tiktoken.encoding_for_model(selected_model)
    # encode the text into a list of token ids and count them
    tokens = encoding.encode(text)
    return len(tokens)
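For instance, a quick sanity check (the model name here is just an example):

prompt = "ChatGPT is fun!"
print(count_tokens(prompt, "gpt-3.5-turbo"))  # prints the number of tokens tiktoken counts for this model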

Let's see this in action: Watch Video.

There you have it! You're now a token-counting wizard! 🧙‍♂️

Cost Estimation in an LLM Model

Don't Break the Bank! 😄

In this lecture, we're diving into the nitty-gritty world of cost estimation in OpenAI.

You're probably thinking, "We're coders, not accountants!" But hear me out. It's important!

Knowing your costs will ensure that your AI interactions don't surprise you with a bill that makes your eyes pop out.

Once you learn to estimate costs, you can gauge what your prompts and scripts will cost, both before and after running them.

Understanding Costs

The cost is calculated based on the number of tokens processed, both in the input and the output. For example, if your input is 10 tokens and the output is 20 tokens, you're billed for 30 tokens in total.

Remember, the AI doesn't understand words or sentences, only tokens. A long, complicated word may be split into several tokens, while a tiny punctuation mark is a token all by itself. Those tiny tokens can really add up, so let's count them!
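To make that concrete with the rates used in the function below: at gpt-3.5-turbo's input price of $0.0015 per 1,000 tokens, a 30-token request costs (30 / 1000) × $0.0015 = $0.000045, so it would take tens of thousands of such requests to spend a single dollar.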

Estimating Cost in an LLM Model

Again, fortunately, you are here. I created a simple function to estimate the costs, and you are free to use it however you want.

def estimate_input_cost(model_name, token_count):
    # input-token prices in USD per 1,000 tokens (check OpenAI's pricing page for current rates)
    if model_name == "gpt-3.5-turbo-0613":
        cost_per_1000_tokens = 0.0015
    elif model_name == "gpt-3.5-turbo-16k-0613":
        cost_per_1000_tokens = 0.003
    elif model_name == "gpt-4-0613":
        cost_per_1000_tokens = 0.03
    elif model_name == "gpt-4-32k-0613":
        cost_per_1000_tokens = 0.06
    else:
        # fail loudly instead of silently using an undefined rate
        raise ValueError(f"Unknown model: {model_name}")

    estimated_cost = (token_count / 1000) * cost_per_1000_tokens
    return estimated_cost

This function takes the token count you get from count_tokens and multiplies it by the model's rate per 1,000 input tokens. The rate varies by model, with gpt-3.5-turbo being cheaper than the others, so choose your model wisely! (Output tokens are billed as well, so treat this as the input side of the bill.)
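Putting the two helpers together, here is a minimal sketch (the prompt and model name are just examples) for estimating a prompt's input cost before you send it:

prompt = "Summarize the history of AI in one paragraph."
model = "gpt-3.5-turbo-0613"

tokens = count_tokens(prompt, model)        # from the token-counting lecture
cost = estimate_input_cost(model, tokens)   # input side only
print(f"{tokens} tokens -> estimated input cost: ${cost:.6f}")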

That's all for this topic! Now you can estimate the cost of your OpenAI requests without breaking a sweat.

Remember: Count your tokens, and save your pennies!

A Generic LLM Function

What if ChatGPT Is Down? 😱

In today's digital landscape, thousands of tools, services, scripts, and courses like this one are built on the OpenAI API, leveraging ChatGPT, GPT-4, or other LLM models for AI text generation. So it's hard to imagine OpenAI simply shutting down.

However, let's hypothesize a scenario where OpenAI closes API access, imposes strict conditions, or raises prices to the point where the API becomes unaffordable for many. Would this course and others like it become obsolete in such a case?

The answer is a simple NO!

Simple Solution!

To overcome such a potential problem, I have developed a script that can generate text using any Language Model, meaning we are not solely reliant on OpenAI.

This solution allows you to choose any API you want, or even use your own if you have one, to generate text. Let me demonstrate this in the following short video:

This flexible approach is not just a backup plan if ChatGPT stops working, but it also makes our tools and scripts more adaptable, letting everyone choose the language model that fits their needs best!

Here is the full script:

import openai
import nlpcloud
import cohere

"""
LLMs:
OpenAI
NlpCloud
Cohere
You can add your own
"""


def llm_generate_text(prompt, service, model):
    if service == 'OpenAI':
        generated_text = openai_generate(prompt, model)
    elif service == 'NlpCloud':
        generated_text = nlp_cloud_generate(prompt, model)
    elif service == 'Cohere':
        generated_text = cohere_generate(prompt, model)
    else:
        # fail loudly instead of returning an undefined variable
        raise ValueError(f"Unknown service: {service}")

    return generated_text


# OpenAI function
openai.api_key = "sk-beCoSmKAoAP60SFpjc4sdfkSL"  # replace with your OpenAI API key
def openai_generate(user_prompt,selected_model):
    completion = openai.ChatCompletion.create(
        model=selected_model,
        messages=[
            {"role": "user", "content": user_prompt}
        ]
    )
    return completion.choices[0].message.content

# NLP Cloud function
nlp_cloud_key = "f1720b3bc2102ddf9"  # replace with your NLP Cloud API token
def nlp_cloud_generate(user_prompt, selected_model):
    client = nlpcloud.Client(selected_model, nlp_cloud_key, gpu=True, lang="en")
    result = client.generation(
        user_prompt,
        min_length=0,
        max_length=100,
        length_no_input=True,
        remove_input=True,
        end_sequence=None,
        top_p=1,
        temperature=0.8,
        top_k=50,
        repetition_penalty=1,
        length_penalty=1,
        do_sample=True,
        early_stopping=False,
        num_beams=1,
        no_repeat_ngram_size=0,
        num_return_sequences=1,
        bad_words=None,
        remove_end_sequence=False
    )
    return result["generated_text"]

# Cohere function
cohere_api_key = "fk8B74dEf1DlsdzIGkA4lL"  # replace with your own (trial) Cohere API key
def cohere_generate(user_prompt, selected_model):
    co = cohere.Client(cohere_api_key)
    response = co.generate(
        model=selected_model,
        prompt=user_prompt,
        max_tokens=300,
        temperature=0.9,
        k=0,
        stop_sequences=[],
        return_likelihoods='NONE')

    return response.generations[0].text
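Here is a quick usage sketch (the model names are only illustrative examples; pick whichever models each service currently offers):

prompt = "Write a haiku about Python."

print(llm_generate_text(prompt, service="OpenAI", model="gpt-3.5-turbo"))
print(llm_generate_text(prompt, service="NlpCloud", model="finetuned-gpt-neox-20b"))
print(llm_generate_text(prompt, service="Cohere", model="command"))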

Prompt Templates

In this section, we will talk about something big: prompt templates.

Why big? Because prompt templates can significantly improve how you work with your AI model. And, believe me, that's big! Let's dive right in!

In this section, you will learn how to organize and use prompts easily in your Python script.

What are Prompt Templates?

Prompt templates are like skeletons for your prompts. They're a predefined structure that you fill in with the specific details for each prompt.

For instance, if you're asking a model to translate English to French, you could use a template like

"Translate the following English text to French: '{}'", and replace the '{}' with the text you want translated.

Why use templates? They ensure that your model gets the right context to understand what you're asking.

Remember, AI models are a bit like toddlers: they need clear and consistent instructions! 🍼
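To make the idea concrete, here is a tiny sketch using Python's built-in str.format with a named placeholder (the template text is just an example):

translate_prompt = "Translate the following English text to French: '{text}'"

# fill the placeholder with the text you want translated
print(translate_prompt.format(text="Hello, how are you?"))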

Let's Start

First, we are going to create a basic prompt structure in a new Python script file called prompts.py.

Let's add our first prompts. Copy the following into the prompts.py file:

twitter_thread_generator_prompt = """Act as if you're a social media expert.
Give me a 10 tweet thread about [{topic}].
The thread should be optimised for virality and contain hashtags and emoticons.
Each tweet should not exceed 280 characters in length."""

blog_bullet_summary_prompt = """\
In the following input, I am going to give you a text you should summarize
for me in a bullet-point format.
I will give you a maximum and a minimum number of bullet points
you should use for the summary.
I am also going to give you the text itself after that.
The language of the text you get should define in which language you write the summary.
For example, if the text is German, the summary should be in German too.
This goes for every language. While writing the summary,
I want you to remember the following information: When it comes to writing content,
two factors are crucial, "perplexity" and "burstiness."
Perplexity measures the complexity of text. Separately,
burstiness compares the variations of sentences.
Humans tend to write with great burstiness, for example,
with some longer or complex sentences alongside shorter sentences.
AI sentences tend to be more uniform. Therefore,
when writing the following content I am going to ask you to create,
I need it to have a good amount of perplexity and burstiness.
Keep that information in mind while writing.
Maximum: [{MaxPoints}], Minimum: [{MinPoints}], Text: {InputText}"""

As you can see, we added two prompts, one for creating Twitter threads and the other for summarizing a blog post.

How to Use the Prompts in Our Scripts

I created a short video demonstrating how to import prompts.py into your script and use the prompts with the text generation function we created earlier.

Here is the video: Watch Video
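If you prefer reading to watching, the idea looks roughly like this (a sketch, assuming prompts.py sits next to your main script and you reuse the llm_generate_text function from the previous section):

from prompts import twitter_thread_generator_prompt

# fill the {topic} placeholder, then hand the finished prompt to any LLM
prompt = twitter_thread_generator_prompt.format(topic="learning Python")
print(llm_generate_text(prompt, service="OpenAI", model="gpt-3.5-turbo"))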

Organizing prompts into categories

As we saw in the video, we have a problem: managing a large number of prompts across different categories. Let's solve this!

I will create a folder in our project called "prompt_templates".

And inside this folder, I will create a Python script file for each category.

For now, we will create 3 categories to demonstrate this, and you can add as many as you want depending on your scenarios.

And I will move the prompts to the appropriate script files.

Now let's do some magic and see how we can access prompts easily from each file.

Check out this video: Watch the Video
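For reference, the setup from the video looks roughly like this (the category names are just examples, not fixed):

# Folder layout:
#
#   prompt_templates/
#       social_media_prompts.py
#       summarization_prompts.py
#       marketing_prompts.py
#
# Then import a prompt from its category file:
from prompt_templates.social_media_prompts import twitter_thread_generator_prompt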

Prompt templates are important because they provide consistency and clarity, which can significantly improve the model's responses. A good template makes you more likely to get the desired result.