After a somewhat rough start, Google seems to be back in the AI race. With the release of the PaLM 2 AI model, Google has shown that it can develop capable AI models with great efficiency and speed. And now, you can try out the PaLM 2 AI model right away using the Vertex AI platform. In this article, we have included the step-by-step process to set up and use the Google PaLM 2 model. On that note, let’s jump right in.
Step 1: Set Up Google Cloud Account
1. First of all, go to cloud.google.com/vertex-ai and click on the “Try Vertex AI free” button.
2. Next, sign in with your Google account and fill out other details.
3. After that, add a payment method right below. Don’t worry; you won’t be charged until your free credits are exhausted. Google only places a tiny temporary charge to verify your card.
4. Once done, you will get free Google Cloud credits worth Rs. 24,531 (~$300).
Step 2: Access the Vertex AI Platform
1. Now that you have created your Google Cloud account, type “Vertex AI” in the search box at the top.
2. Here, select “Vertex AI” under the “Products & Pages” section.
3. After that, click on “Language” under “Generative AI Studio” in the left menu.
4. Here, click on the “Create Chat Prompt” option. This will allow you to chat with Google’s PaLM 2 AI model.
5. Now, enter a prompt and hit Enter. You will likely get an error since you have not enabled the Vertex AI API just yet.
6. In the pop-up box, click the “Enable” button to access the Vertex AI API. If you have not added a card yet, it will ask you to add a payment method and create a billing account.
7. Finally, the Vertex AI API will be enabled. If you would rather work from code than the console, a quick SDK setup sketch follows below.
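For developers, the same project can also be used through the Vertex AI Python SDK instead of the web console. Below is a minimal setup sketch, assuming you have installed the google-cloud-aiplatform package, configured Application Default Credentials with the gcloud CLI, and enabled the Vertex AI API as described above; the project ID and region are placeholders you would replace with your own.

```python
# Minimal Vertex AI SDK setup sketch.
# Assumes: pip install google-cloud-aiplatform
#          gcloud auth application-default login   (for credentials)
import vertexai

PROJECT_ID = "my-gcp-project"   # hypothetical project ID -- replace with yours
LOCATION = "us-central1"        # a region where Vertex AI generative models are available

# Initialize the SDK; this presumes the Vertex AI API is already enabled on the project.
vertexai.init(project=PROJECT_ID, location=LOCATION)
```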
Step 3: Use the Google PaLM 2 AI Model
1. Now, go ahead and try out a chatbot based on the PaLM 2 AI model right away. The UI here looks similar to the OpenAI Playground website, which is one of the best ChatGPT alternatives.
2. On the left, you can set the context similar to OpenAI Playground. You can ask the PaLM 2 AI model to pretend to be a doctor, coder, stock analyst, or basically anything you want.
3. And on the right, you can choose the model (currently, it only offers the PaLM chat-bison@001 model). Unicorn, the largest PaLM 2 AI model, is still not available for evaluation. Apart from that, you can adjust the temperature, token limit, Top-K, and Top-P values (the code sketch after these steps shows how to set the same parameters programmatically).
4. We asked about its foundation model, but it kept repeating that it’s built on Google’s older LaMDA model. At least now it no longer claims that PaLM 2 was built by OpenAI, as Anmol from our team encountered earlier.
5. Furthermore, we asked the PaLM 2 AI model to find a bug in the code, and it correctly found the error and fixed the code. We also asked it to solve a riddle and it responded with the correct answer.
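For developers who would rather hit the model from code, here is a rough sketch of the same chat flow using the Vertex AI Python SDK as it shipped around the PaLM 2 launch. The vertexai.preview.language_models module path and the chat-bison@001 model name come from that preview SDK and may have moved in newer releases, and the context string and buggy-code prompt are purely illustrative.

```python
from vertexai.preview.language_models import ChatModel

# Load the chat-tuned PaLM 2 (Bison) model exposed through Vertex AI.
chat_model = ChatModel.from_pretrained("chat-bison@001")

# The context string plays the same role as the context box in Generative AI Studio:
# it tells the model which persona or task to stick to for the whole conversation.
chat = chat_model.start_chat(
    context="You are a senior Python developer who reviews code and explains bugs.",
)

# These sampling knobs mirror the sliders on the right side of the Studio UI.
response = chat.send_message(
    "Find the bug in this function:\n"
    "def average(nums):\n"
    "    return sum(nums) / len(nums) + 1\n",
    temperature=0.2,        # lower values give more deterministic answers
    max_output_tokens=256,  # cap on the reply length
    top_k=40,               # sample only from the 40 most likely next tokens
    top_p=0.95,             # nucleus-sampling probability threshold
)
print(response.text)
```

Since the chat object keeps the running message history, follow-up questions (like asking it to fix the bug it just found) can simply be sent with further send_message() calls.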
PaLM 2 (Bison) AI Model: First Impressions
I am quite amazed after interacting with the PaLM 2 AI model. And this is not even the largest Unicorn model; I am chatting with the smaller Bison model, and it has so far shown great promise. I threw many coding-related questions and some tough reasoning problems at this chatbot, and it excelled across the board.
To test hallucination, I asked many questions about fictional places and events, and it seems PaLM 2 has been trained well to avoid stating outright falsehoods. In my testing, it correctly pointed out when the places or events in question were fictional. In some cases, it simply refused to answer questions that were themselves built on fictional premises. On a few other queries, it did hallucinate, but that is common with AI models at the moment.
Apart from that, the best part is that PaLM 2 has been trained on data up to February 2023, unlike several OpenAI models whose knowledge cut-off is September 2021. In addition, the PaLM 2-based Bison model supports a maximum input of 4,096 tokens and a maximum output of 1,024 tokens (a short sketch of working within these limits follows below).
Sure, it can’t match GPT-4’s 8K and 32K context lengths, but for most AI-based applications, PaLM 2 is cheaper and faster. To sum up, PaLM 2 is being regularly fine-tuned and updated (the latest update landed on May 10, 2023), and for developers, I think Google’s PaLM 2 AI model might be a better offering than the competition.
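To give a feel for how those token limits surface in practice, here is a small, hedged sketch of keeping a chat-bison@001 call inside the documented 4,096-token input and 1,024-token output budget. The four-characters-per-token estimate is just a rough rule of thumb, not the model’s real tokenizer, and the real input limit also counts the chat context and history, which this simple check ignores.

```python
from vertexai.preview.language_models import ChatModel

MAX_INPUT_TOKENS = 4096    # documented input limit for the Bison chat model
MAX_OUTPUT_TOKENS = 1024   # documented output limit

def estimate_tokens(text: str) -> int:
    # Rough heuristic (~4 characters per token); an assumption, not the real tokenizer.
    return max(1, len(text) // 4)

def ask_within_limits(chat, prompt: str):
    # Reject prompts that would likely blow past the input budget
    # (note: the real limit also covers the context and prior turns).
    if estimate_tokens(prompt) > MAX_INPUT_TOKENS:
        raise ValueError("Prompt is probably over the 4,096-token input limit; shorten it.")
    # Never request more output than the model's documented ceiling.
    return chat.send_message(prompt, max_output_tokens=MAX_OUTPUT_TOKENS)

chat = ChatModel.from_pretrained("chat-bison@001").start_chat()
print(ask_within_limits(chat, "Summarize what the PaLM 2 Bison model is good at.").text)
```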