4 changes: 2 additions & 2 deletions doc/tutorial/fate_llm/GPT2-example.ipynb
@@ -5,15 +5,15 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Federated GPT-2 Tuning with Parameter Efficient methods in FATE-1.11"
"# Federated GPT-2 Tuning with Parameter Efficient methods in FATE-LLM"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"In this tutorial, we will demonstrate how to efficiently train federated large language models using the FATE 1.11 framework. In FATE-1.11, we introduce the \"pellm\"(Parameter Efficient Large Language Model) module, specifically designed for federated learning with large language models. We enable the implementation of parameter-efficient methods in federated learning, reducing communication overhead while maintaining model performance. In this tutorial we particularlly focus on GPT-2, and we will also emphasize the use of the Adapter mechanism for fine-tuning GPT-2, which enables us to effectively reduce communication volume and improve overall efficiency.\n",
"In this tutorial, we will demonstrate how to efficiently train federated large language models using the FATE-LLM framework. In FATE-LLM, we introduce the \"pellm\"(Parameter Efficient Large Language Model) module, specifically designed for federated learning with large language models. We enable the implementation of parameter-efficient methods in federated learning, reducing communication overhead while maintaining model performance. In this tutorial we particularlly focus on GPT-2, and we will also emphasize the use of the Adapter mechanism for fine-tuning GPT-2, which enables us to effectively reduce communication volume and improve overall efficiency.\n",
"\n",
"By following this tutorial, you will learn how to leverage the FATE framework to rapidly fine-tune federated large language models, such as GPT-2, with ease and efficiency."
]
4 changes: 2 additions & 2 deletions doc/tutorial/fate_llm/GPT2-multi-task.ipynb
@@ -5,15 +5,15 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Multi-Task Federated Learning with GPT-2 using FATE-1.11"
"# Multi-Task Federated Learning with GPT-2 using FATE-LLM"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"In this tutorial, we will explore the implementation of multi-task federated learning with LM: GPT-2 using the FATE-1.11 framework. FATE-1.11 provides the \"pellm\" module for efficient federated learning. It is specifically designed for large language models in a federated setting.\n",
"In this tutorial, we will explore the implementation of multi-task federated learning with LM: GPT-2 using the FATE-LLM framework. FATE-LLM provides the \"pellm\" module for efficient federated learning. It is specifically designed for large language models in a federated setting.\n",
"\n",
"Multi-task learning involves training a model to perform multiple tasks simultaneously. In this tutorial, we will focus on two tasks - sentiment classification and named entity recognition (NER) - and show how they can be combined with GPT-2 in a federated learning setting. We will use the IMDB sentiment analysis dataset and the CoNLL-2003 NER dataset for our tasks.\n",
"\n",