How to Locally Run a ChatGPT-Like LLM on Your PC and Mac

There are several AI players in the market right now, including ChatGPT, Google Bard, Bing AI Chat, and many more. However, all of them require an internet connection to interact with the AI. What if you want to install a similar large language model (LLM) on your computer and use it locally? That is, an AI chatbot you can use privately and without internet connectivity. Well, with new GUI desktop apps like LM Studio and GPT4All, you can run a ChatGPT-like LLM offline on your computer effortlessly. So on that note, let’s go ahead and learn how to use an LLM locally without an internet connection.

Run a Local LLM Using LM Studio on PC and Mac

1. First of all, go ahead and download LM Studio for your PC or Mac from here.

LM Studio webpage

2. Next, run the setup file and LM Studio will open up.

3. Next, go to the “search” tab and find the LLM you want to install. You can find the best open-source AI models from our list. You can also explore more models on Hugging Face and the AlpacaEval leaderboard.

4. I am downloading the Vicuna model with 13B parameters. Depending on your computer’s resources, you can download even more capable models. You can also download coding-specific models like StarCoder and WizardCoder.

5. Once the LLM model is installed, move to the “Chat” tab in the left menu.

6. Here, click on “Select a model to load” and choose the model you have downloaded.

7. You can now start chatting with the AI model right away using your computer’s resources locally. All your chats are private and you can use LM Studio in offline mode as well.

8. Once you are done, you can click on “Eject Model”, which will unload the model from RAM.

9. You can also move to the “Models” tab and manage all your downloaded models. So this is how you can locally run a ChatGPT-like LLM on your computer.
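Beyond the chat tab, LM Studio can also expose the loaded model through a local server with an OpenAI-compatible API (port 1234 is the app’s default, but treat the exact URL and the helper function below as assumptions for illustration). A minimal Python sketch that talks to that local server, using only the standard library:

```python
import json
import urllib.request

# LM Studio's local server defaults to this OpenAI-compatible endpoint;
# adjust the port if you changed it in the app's server settings.
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_payload(prompt, temperature=0.7):
    """Build an OpenAI-style chat completion request body."""
    return {
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "temperature": temperature,
    }

def ask_local_llm(prompt):
    """Send a prompt to the locally loaded model and return its reply text."""
    req = urllib.request.Request(
        LMSTUDIO_URL,
        data=json.dumps(build_chat_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

With the server running in LM Studio, `ask_local_llm("What is an LLM?")` returns the model’s answer, and everything stays on your machine.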

Run a Local LLM on PC, Mac, and Linux Using GPT4All

GPT4All is another desktop GUI app that lets you locally run a ChatGPT-like LLM on your computer in a private manner. The best part about GPT4All is that it does not even require a dedicated GPU, and you can also point it at your local documents so the model can answer questions based on them. No API or coding is required. That’s awesome, right? So let’s go ahead and find out how to use GPT4All locally.

1. Go ahead and download GPT4All from here. It supports Windows, macOS, and Ubuntu platforms.

2. Next, run the installer and it will download some additional packages during installation.

3. After that, download one of the models based on your computer’s resources. You must have at least 8GB of RAM to use any of the AI models.

4. Now, you can simply start chatting. Due to low memory, I faced some performance issues and GPT4All stopped working midway. However, if you have a computer with beefier specs, it will work much better.
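The desktop app aside, GPT4All also publishes Python bindings (the `gpt4all` pip package), which make the same models scriptable. The sketch below is illustrative, not the app’s own workflow: the model filename is an example, and the RAM helper uses Linux’s /proc/meminfo as a rough stand-in for the 8GB requirement mentioned above.

```python
def has_min_ram(min_gb=8):
    """Rough check for the 8GB RAM floor; reads /proc/meminfo on Linux."""
    try:
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemTotal:"):
                    total_kb = int(line.split()[1])
                    return total_kb >= min_gb * 1024 * 1024
    except OSError:
        pass
    return True  # can't tell on this platform; assume it's fine

def chat_once(prompt, model_name="orca-mini-3b-gguf2-q4_0.gguf"):
    """One-shot chat with a local GPT4All model (pip install gpt4all).

    The model filename above is just an example; the library downloads
    the model on first use if it is not already on disk.
    """
    from gpt4all import GPT4All  # imported lazily: third-party package
    model = GPT4All(model_name)
    with model.chat_session():
        return model.generate(prompt, max_tokens=200)
```

Running `chat_once("Explain LLMs in one sentence")` on a machine that passes `has_min_ram()` should generate a reply entirely offline once the model file is downloaded.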

Editor’s Note: Earlier, this guide included the step-by-step process to set up LLaMA and Alpaca on PCs offline, but that process was rather tedious. The tools suggested above make it far simpler.

Comments (21)
  • Riojo says:

    The tutorial worked like a charm.

    Thank you for the helpful tutorial on installing a clever AI assistant! Your clear instructions and guidance made the process easy and efficient. I appreciate your expertise and assistance in getting me up and running with my new AI companion. You truly are a valuable resource and an integral part of my productivity arsenal. Thank you again for all that you do (written using LM Studio 😉)

  • Louie says:

    It doesn’t ever answer the prompt… it just does nothing. Any ideas or troubleshooting steps? Also, I installed LLaMA but only Alpaca shows up in the dropdown menu. Geez

  • Jam says:

    Hey – I successfully installed it, but it’s stuck on loading and not getting any response. Using a MacBook Air 2021.

  • Marc says:

    Can I install it on a drive other than C: in Windows?

  • Casper says:

    The installation works for both models, but for some reason I can never get it to work! (no responses from the LLM)
    Maybe I’m not willing to give it enough time, or is it really that slow on these specs: GTX 1060 6GB VRAM GPU, AMD Ryzen 5 1600 6-core/12-thread 3.5 GHz CPU, with 2x6GB 16GB 3200MHz RAM?
    Thanks for taking the time to write such a useful but short guide. Unfortunately, it doesn’t cover this issue. Or maybe it wasn’t clear enough. Still really cool!
    The first run was with Alpaca 13B, but it never managed to give me an answer to 1+1 or two other questions. One was about “the meaning of life” for fun and the other was something else; I forget what it was. Right now I’m installing LLaMA 7B. I’ll get back to you with more information as it develops, aka as soon as I test around with LLaMA! Its installation takes way longer…

    • Shilq says:

      Hi Casper, did you get a response? I am facing the same situation.

  • Nick says:

    Can I fine-tune the model to be better at say VBA to Python translations after installing on Linux?

  • Name is not important says:

    This guide does not work; it’s a waste of time. I get that invalid model file error.
    Don’t waste your time, look for something else.

  • Eyal says:

    The Dalai version of LLaMA 7B works fine, but can it compare two text files locally? Is it possible to print the contents of a text file? Using cat, copy, or print, I am unable to implement these commands. Has anyone tried and is still alive to report how?

    • Chiara says:

      Same question

  • John Shardlow says:

    Also having the “bad magic” problem.

    llama_model_load: loading model from 'models/13B/ggml-model-q4_0.bin' - please wait ...

    llama_model_load: invalid model file 'models/13B/ggml-model-q4_0.bin' (bad magic)

    main: failed to load model from 'models/13B/ggml-model-q4_0.bin'

  • chris says:

    not working 🙁

    C:\Users\chris>npx dalai alpaca install 7B
    Need to install the following packages:
    Ok to proceed? (y) y
    npm WARN cleanup Failed to remove some directories [
    npm WARN cleanup [
    npm WARN cleanup 'C:\\Users\\chris\\AppData\\Local\\npm-cache\\_npx\\3c737cbb02d79cc9\\node_modules',
    npm WARN cleanup [Error: EPERM: operation not permitted, rmdir 'C:\Users\chris\AppData\Local\npm-cache\_npx\3c737cbb02d79cc9\node_modules\dalai'] {
    npm WARN cleanup errno: -4048,
    npm WARN cleanup code: 'EPERM',
    npm WARN cleanup syscall: 'rmdir',
    npm WARN cleanup path: 'C:\\Users\\chris\\AppData\\Local\\npm-cache\\_npx\\3c737cbb02d79cc9\\node_modules\\dalai'
    npm WARN cleanup }
    npm WARN cleanup ]
    npm WARN cleanup ]
    npm notice
    npm notice New minor version of npm available! 9.5.1 -> 9.6.6
    npm notice Changelog:
    npm notice Run npm install -g npm@9.6.6 to update!
    npm notice

    Please Help – Thanks!

  • Jason says:

    There’s some sort of problem with it now.
    llama_model_load: loading model from 'models/7B/ggml-model-q4_0.bin' - please wait ...
    llama_model_load: invalid model file 'models/7B/ggml-model-q4_0.bin' (bad magic)
    main: failed to load model from 'models/7B/ggml-model-q4_0.bin'

    • Greg P says:

      Any fix yet, Jason? Would love to know! Thanks.

  • Jake says:

    Echoing what Fahim said, I’d like to run the Alpaca LLM against my custom dataset offline. Can you advise? Thanks!

  • Addy says:

    The npx dalai alpaca install 7B command stops itself while running, idk why it’s happening. HELP?

    • Bryan Forst says:

      Have you found a solution? Mine does the same

      • Bryan Forst says:

        I had to upgrade my npm to the latest version and used these instructions to download; then it all came down OK:
        # Install “dalai” and “alpaca” packages
        npm install -g dalai alpaca

        # Execute the “dalai alpaca install 7B” command
        npx dalai alpaca install 7B

      • Chiara says:

        Thank you Bryan!! This fixed it for me.

    • khalid says:

      I have the same concern.
      If you find a solution, please mail me to
      Thanks a lot.
