Show HN: Panza: A personal email assistant, trained and running on-device

https://github.com/IST-DASLab/PanzaMail

[Panza logo]

Panza: A personal email assistant, trained and running on-device

Open In Studio: https://lightning.ai/maddox-j/studios/panzamail-demo

What is Panza?

Panza is an automated email assistant customized to your writing style and past email history.
Its main features are as follows:

  • Panza produces a fine-tuned LLM that matches your writing style, pairing it with a Retrieval-Augmented Generation (RAG) component which helps it produce relevant emails.
  • Panza can be trained and run entirely locally. Currently, it requires a single GPU with 16-24 GiB of memory, but we also plan to release a CPU-only version. At no point in training or execution is your data shared with the entities that trained the original LLMs, with LLM distribution services such as Hugging Face, or with us.
  • Training and execution are also quick - for a dataset on the order of 1000 emails, training Panza takes well under an hour, and generating a new email takes a few seconds at most.

[Panza demo GIF]

Prerequisites

  • Your emails, exported to mbox format (see tutorial below).
  • A computer, preferably with an NVIDIA GPU with at least 24 GiB of memory (alternatively, check out running in Google Colab).
  • A Hugging Face account to download the models (free of charge).
  • [Optional] A Weights & Biases account to log metrics during training (free of charge).
  • Basic Python and Unix knowledge, such as building environments and running Python scripts.
  • No prior LLM experience is needed.

How it works

📽️ Step 1: Data playback

For most email clients, it is possible to download a user's past emails in a machine-friendly .mbox format. For example, Gmail allows you to do this via Google Takeout, whereas Thunderbird supports this via various plugins.

One key part of Panza is a dataset-generation technique we call data playback: given some of your past emails in .mbox format, we automatically create a training set for Panza by using a pretrained LLM to summarize each email in instruction form, so that each email becomes a (synthetic instruction, real email) pair. We then use these pairs to "play back" your sent emails: the LLM receives only the instruction and has to generate the "ground truth" email as a training target.

We find this approach very effective at teaching the LLM the user's writing style.
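To make data playback concrete, here is a minimal Python sketch of the idea. This is illustrative only, not Panza's actual code: summarize_to_instruction and the generate callable (standing in for any local instruction-tuned LLM) are hypothetical names.

# Minimal data-playback sketch (illustrative; not Panza's actual pipeline).
def summarize_to_instruction(email_body: str, generate) -> str:
    # Ask a pretrained LLM to compress the email into a short writing instruction.
    prompt = ("Summarize the following email as a one-sentence writing instruction:\n\n"
              + email_body)
    return generate(prompt)

def build_playback_dataset(emails, generate):
    # Each sent email becomes a (synthetic instruction, real email) pair;
    # during training the model sees only the instruction and must reproduce
    # the real email as the target.
    return [{"instruction": summarize_to_instruction(body, generate), "target": body}
            for body in emails]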

🏋️ Step 2: Local Fine-Tuning via Robust Adaptation (RoSA)

We then use parameter-efficient finetuning to train the LLM on this dataset, locally. We found that we get the best results with the RoSA method, which combines low-rank (LoRA) and sparse finetuning. If parameter efficiency is not a concern, that is, if you have a more powerful GPU, regular full-rank/full-parameter finetuning can also be used. We find that a moderate amount of further training strikes the right balance: it matches the writer's style without memorizing irrelevant details from past emails.
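Since RoSA is not yet part of mainstream finetuning libraries, the low-rank half of the idea can be sketched with Hugging Face peft's LoRA as a stand-in; the base model ID and hyperparameter values below are assumptions for illustration, not Panza's shipped configuration.

# LoRA stand-in for the low-rank component of RoSA (sketch, not Panza's code).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
config = LoraConfig(
    r=16,                                  # rank of the low-rank update (assumed value)
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],   # attention projections (assumed choice)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only a small fraction of weights are trainable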

🦉 Step 3: Serving via RAG

Once we have a custom user model, Panza can be run locally together with a Retrieval-Augmented Generation (RAG) module. Specifically, this functionality stores past emails in a database and provides a few relevant emails as context for each new query. This allows Panza to better insert specific details, such as a writer's contact information or frequently used Zoom links.
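As a rough sketch of the retrieval step, the following shows top-k email retrieval with sentence-transformers embeddings and a FAISS index. The embedding model choice here is an assumption; Panza builds its own index during data preparation (see data/<username>.faiss below).

# RAG retrieval sketch (illustrative; not Panza's serving code).
import faiss
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def build_index(past_emails):
    vecs = embedder.encode(past_emails, convert_to_numpy=True)
    index = faiss.IndexFlatL2(vecs.shape[1])
    index.add(vecs)
    return index

def retrieve(index, past_emails, query, k=3):
    qvec = embedder.encode([query], convert_to_numpy=True)
    _, ids = index.search(qvec, k)
    # The retrieved emails are prepended to the prompt as context.
    return [past_emails[i] for i in ids[0]]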

The overall structure of Panza is as follows:

[Panza architecture diagram]

Installation

Environment

We tested Panza using Python 3.10. If you are running a different version, you can either install it directly or, for instance, use miniconda:

conda create -n panza python=3.10 -y
conda activate panza

Then, install the required packages.

If you also want to finetune models using Panza, you will need to install additional packages.
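The install commands themselves are missing from this copy of the README. A plausible reconstruction, assuming a source install whose extras mirror the [contributing] extra used in the Contributing section below, is:

pip install .             # core dependencies (assumed; exact command omitted in this copy)
pip install .[training]   # finetuning extras (assumed extra name)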

🚀 Getting started

To quickly get started with building your own personalized email assistant, follow the steps below:

Step 0: Download your sent emails

Expand for detailed download instructions.

We provide a description for doing this for Gmail via Google Takeout.

  1. Go to https://takeout.google.com/.
  2. Click Deselect all.
  3. Find the Mail section (search for the phrase Messages and attachments in your Gmail account in MBOX format).
  4. Select it.
  5. Click on All Mail data included and deselect everything except Sent.
  6. Scroll to the bottom of the page and click Next step.
  7. Click on Create export.
  8. Wait for the download link to arrive in your inbox.
  9. Download Sent.mbox and place it in the data/ directory.

For Outlook accounts, we suggest doing this via a Thunderbird plugin that exports a subset of your email in MBOX format, such as this add-on.

At the end of this step you should have the downloaded emails placed inside data/Sent.mbox.
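As a quick sanity check of the export, you can count and peek at messages with Python's standard-library mailbox module (this is just a check; Panza's own parsing happens in the next steps):

import mailbox

mb = mailbox.mbox("data/Sent.mbox")
print(f"{len(mb)} messages in export")
first = mb[0]
print(first["From"], "->", first["To"], "|", first["Subject"])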

Step 1: Environment configuration

Panza is configured through a set of yaml configurations defined in configs/. There is a single high-level config under configs/base.yaml, and the rest are organized under the main functionalities of the code. Note that these task-specific configs can, in some cases, be used to override base configs. Specific use cases, such as hyperparameter tuning, are covered in more detail in scripts/README.md.

  1. Data preparation: configs/data_preparation.yaml. Additionally, a custom user config must be created under config/users/ (see below).
  2. Finetuning: the main config is in configs/panza_finetuning.yaml and the method-specific ones are in configs/finetuning/
  3. Serving: Serving consists of two parts - a serving infrastructure (which we call the 'writer') that runs the LLM and converts prompts into Panza outputs, and an interface, which presents the outputs in a useful form - through a command-line interface, a web interface, a Gmail client, or in a bulk .json format (useful for evaluation). The configs for serving are in panza_writer.yaml, and for the interfaces, under configs/interfaces.

These scripts are described in more detail in scripts/README.md, but a few customizations need to happen immediately. ⚠️ Before continuing, make sure you complete the following setup:

  • Perform the following modifications on users/default.yaml directly. If running Panza for multiple users, copy this file to, for example, users/jen.yaml and specify the user in Panza training commands.
  • In the user config, set the email address and username (see the example below). The email address should be the sender address in the exported emails; Panza uses it to filter out responses and other emails sent by a different author in the .mbox dump. The username does not have to be linked to the email itself - it is simply used to name the various data files produced by the data preparation process. A handy choice is the output of the whoami command in your shell.
  • Modify the personal prompt in prompt_preambles/user_preamble.txt to include some basic information about yourself that Panza can use to customize your emails with your correct full name, address, phone number, etc.
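For illustration, a per-user config might look like the sketch below; the key names here are assumptions, so keep whatever keys users/default.yaml actually defines.

# users/jen.yaml (hypothetical; mirror the real keys in users/default.yaml)
email_address: jen@example.com   # sender address in the exported emails
username: jen                    # used to name the generated data files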

Additionally, please perform the following login steps to be able to download the base model.

  • Log in to Hugging Face to be able to download pretrained models: huggingface-cli login.
  • [Optional] Log in to Weights & Biases to log metrics during training: wandb login. Then, set wandb_disabled=false in configs/finetuning/base.yaml.

You are now ready to move to scripts.

Step 2: Extract emails

Run CUDA_VISIBLE_DEVICES=X ./prepare_data.sh.

This script takes care of all the prerequisites before training (expand for details).
- Extracts your emails in text format to `data/<username>_clean.jsonl` which you can manually inspect.
- Creates synthetic prompts for your emails as described in the [data playback](#film_projector-step-1-data-playback) section. The results are stored in `data/<username>_clean_summarized.jsonl` and you can inspect the `"summary"` field.
- Splits data into training and test subsets. See `data/train.jsonl` and `data/test.jsonl`.
- Creates a vector database from the embeddings of the training emails which will later be used for *Retrieval-Augmented Generation (RAG)*. See `data/<username>.pkl` and `data/<username>.faiss`.
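To spot-check the generated pairs, you can read the first record and print its "summary" field (the field name comes from the step above; replace <username> with your configured username):

import json

with open("data/<username>_clean_summarized.jsonl") as f:
    first = json.loads(f.readline())
print(first["summary"])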

NB: if you did not modify user/default.yaml with your particulars but instead created a new file, you need to add the flag user=x to the above command, where x.yaml is the name of your config file.

FAQs. When running the above script, you may encounter an OutOfMemoryError. If this is the case, you can either:
  1. Reduce the batch size for the data processing step. This can be found in configs/panza_preparation.yaml.
  2. Move to a machine that has more memory.

Step 3: Train an LLM on your emails

We currently support LLaMA3-8B-Instruct and Mistral-Instruct-v0.2 LLMs as base models; the former is the default, but we obtained good results with either model.

  1. [Recommended] For parameter-efficient fine-tuning, run ./train_rosa.sh.
    If a larger GPU is available and full-parameter fine-tuning is possible, run ./train_fft.sh.

  2. We have prepopulated the training configs with parameter values that worked best for us. We recommend you try those first, but you can also experiment with different hyper-parameters by passing extra arguments to the training script, such as lr, lora_lr, num_epochs. All the trained models are saved in the checkpoints directory.

Examples:

CUDA_VISIBLE_DEVICES=X ./train_rosa.sh                                   # Will use the default parameters.
CUDA_VISIBLE_DEVICES=X ./train_rosa.sh finetuning.lr=1e-6 finetuning.rosa_lr=1e-6 finetuning.max_duration=7ep

On a smaller GPU, it may be necessary to train in lower precision (QRoSA). This can be run as follows:

./train_rosa.sh finetuning.precision=amp_bf16 finetuning.model.weight_bias_dtype=4bit
FAQs. The bash scripts that execute the finetuning procedure assume by default that your username is what is returned by the whoami command; this is used to locate your user config inside the configs/user directory, as above. If you directly modified default.yaml, or created another yaml file whose name does not match the output of whoami, there will be an error. This is an easy fix. You can either:
  1. Rename the yaml file to match the output of whoami.
  2. Override the username manually when you launch the bash script by adding user=x, where x is the name of the yaml file you created. For example: ./train_rosa.sh user=alonso

If you wish to use CUDA_VISIBLE_DEVICES to specify a particular GPU, add export CUDA_VISIBLE_DEVICES=x directly in the shell script, where x is the ID of the GPU you wish to use.

A known issue is that when you fine-tune your model with RAG, the tokenization of the dataset can seemingly hang. This is due to a known bug in HF's map function when n_proc>1. To alleviate this issue, you can set torch.set_num_threads(1) in src/panza/finetuning/train.py or set the equivalent parameter in configs/finetuning/rosa.yaml.

Step 4: Launch Panza!

  • To run Panza after a full training run, run a command like CUDA_VISIBLE_DEVICES=0 ./runner.sh user=USERNAME interfaces=cli writer/llm=transformers checkpoint=latest.
  • To run Panza after a RoSA or LoRA training run, replace writer/llm=transformers with writer/llm=peft.

🆕 Use Panza in Google Chrome directly with your Gmail!

In addition to the Panza package itself, we have also created a tool that allows you to use Panza directly within your Gmail session. We have published this extension on the Chrome Web Store here. A written guide on how to get this set up follows below.

  • Launch the Panza web server: instead of using the CLI interface above, execute the following command: CUDA_VISIBLE_DEVICES=0 API_KEYS=panza_beta ./runner.sh user=USERNAME interfaces=web writer/llm=peft checkpoint=latest.
    1. We have to choose an API key for the server to use. Since the browser extension is a beta release, the API_KEY defaults to panza_beta.
    2. Executing this script spins up a web server, on port 5001 by default. The port can be changed in the configs/interfaces/web.json file. However, in this beta version our browser extension sends API requests to localhost:5001 only.
  • [Optionally add port forwarding] If you are not running the Panza web server on the same device where Google Chrome is installed, you will be unable to make requests to a server referenced as localhost. To use the server correctly, you will have to enable port forwarding from the remote machine to your local device (see the ssh example after this list). VS Code does this automatically if you are SSH'ed into the remote server and spin up Panza there.
  • Install the Google Chrome extension here. Now that we have set up all the necessary pieces, you can use Panza directly within your Gmail. To do so, simply write a prompt in the main message box, click the Panza icon in the toolbar (as seen in the GIF below), and let Panza take care of the rest!
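For the port-forwarding step above, a standard OpenSSH local forward (not Panza-specific) looks like this, assuming the server runs on the default port 5001:

ssh -L 5001:localhost:5001 user@remote-server   # makes the remote Panza server reachable at localhost:5001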

📧 Have fun with your new email writing assistant! 📧

🔬 Advanced usage

  • Inference on CPU with Ollama (scripts/README.md)
  • Data Preparation Guide (scripts/README.md)
  • Hyper-Parameter Tuning Guide (scripts/README.md)
  • Prompt Preambles Tutorial (prompt_preambles/README.md)

👩‍💻 Contributing

If you liked our work and want to contribute to improving the system, please feel free to do so! Make a fork of our repository and, once you have made your changes, submit a pull request so that we can review it!

One thing to mention: we want to make sure that we all adhere to the same coding standards, so we have added Black, a code formatter, as a pre-commit hook. To ensure that all your files are formatted with Black, do the following:

  1. Install the necessary dependencies:
pip install .[contributing]
  2. Run the precommit command (see the note below).
  3. Continue adding code as usual. All your code will be formatted by Black before committing!
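The precommit command itself is not spelled out here; assuming the standard pre-commit tool manages the Black hook, it is typically:

pre-commit install   # registers the hooks so they run on every commit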

Privacy Statement

The goal of Panza is to give users full control of their data and of models trained on it. As such, no part of Panza, including the Chrome/Gmail plugin, collects any information about its users, beyond the normal summary statistics collected by GitHub and Google (such as the number of stars/forks/downloads). If you choose to run any part of Panza on a hosted service, e.g., on Amazon Web Services or Google Colab, we take no responsibility for any data collection or data breaches that may occur. Additionally, running the Panza web client or the GUI interface (via Gradio) risks providing unauthorized access to the models. Please use at your own risk.

Authors

Panza was conceived by Nir Shavit and Dan Alistarh and built by the Distributed Algorithms and Systems group at IST Austria. The contributors are (in alphabetical order):

Dan Alistarh, Eugenia Iofinova, Andrej Jovanovic, Eldar Kurtic, Ilya Markov, Armand Nicolicioiu, Mahdi Nikdan, Andrei Panferov, Nir Shavit, and Sean Yang.

Contact: [email protected]

We thank our collaborators Michael Goin and Tony Wang at NeuralMagic and MIT for their helpful testing and feedback.
