doc/tutorial/fate_llm/GPT2-example.ipynb
+7 −7 (7 additions & 7 deletions)
@@ -5,15 +5,15 @@
     "cell_type": "markdown",
     "metadata": {},
     "source": [
-    "# Federated GPT-2 Tuning with Parameter Efficient methods in FATE-1.11"
+    "# Federated GPT-2 Tuning with Parameter Efficient methods in FATE-LLM"
     ]
    },
    {
     "attachments": {},
     "cell_type": "markdown",
     "metadata": {},
     "source": [
-    "In this tutorial, we will demonstrate how to efficiently train federated large language models using the FATE 1.11 framework. In FATE-1.11, we introduce the \"pellm\"(Parameter Efficient Large Language Model) module, specifically designed for federated learning with large language models. We enable the implementation of parameter-efficient methods in federated learning, reducing communication overhead while maintaining model performance. In this tutorial we particularlly focus on GPT-2, and we will also emphasize the use of the Adapter mechanism for fine-tuning GPT-2, which enables us to effectively reduce communication volume and improve overall efficiency.\n",
+    "In this tutorial, we will demonstrate how to efficiently train federated large language models using the FATE-LLM framework. In FATE-LLM, we introduce the \"pellm\"(Parameter Efficient Large Language Model) module, specifically designed for federated learning with large language models. We enable the implementation of parameter-efficient methods in federated learning, reducing communication overhead while maintaining model performance. In this tutorial we particularlly focus on GPT-2, and we will also emphasize the use of the Adapter mechanism for fine-tuning GPT-2, which enables us to effectively reduce communication volume and improve overall efficiency.\n",
     "\n",
     "By following this tutorial, you will learn how to leverage the FATE framework to rapidly fine-tune federated large language models, such as GPT-2, with ease and efficiency."
doc/tutorial/fate_llm/GPT2-multi-task.ipynb
+3 −3 (3 additions & 3 deletions)
@@ -5,15 +5,15 @@
     "cell_type": "markdown",
     "metadata": {},
     "source": [
-    "# Multi-Task Federated Learning with GPT-2 using FATE-1.11"
+    "# Multi-Task Federated Learning with GPT-2 using FATE-LLM"
     ]
    },
    {
     "attachments": {},
     "cell_type": "markdown",
     "metadata": {},
     "source": [
-    "In this tutorial, we will explore the implementation of multi-task federated learning with LM: GPT-2 using the FATE-1.11 framework. FATE-1.11 provides the \"pellm\" module for efficient federated learning. It is specifically designed for large language models in a federated setting.\n",
+    "In this tutorial, we will explore the implementation of multi-task federated learning with LM: GPT-2 using the FATE-LLM framework. FATE-LLM provides the \"pellm\" module for efficient federated learning. It is specifically designed for large language models in a federated setting.\n",
     "\n",
     "Multi-task learning involves training a model to perform multiple tasks simultaneously. In this tutorial, we will focus on two tasks - sentiment classification and named entity recognition (NER) - and show how they can be combined with GPT-2 in a federated learning setting. We will use the IMDB sentiment analysis dataset and the CoNLL-2003 NER dataset for our tasks.\n",
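The multi-task notebook above pairs a sentence-level task (IMDB sentiment) with a token-level task (CoNLL-2003 NER) on one GPT-2 backbone. A minimal sketch of that layout in plain PyTorch/`transformers` is shown below; the class name, head names, and label counts are assumptions made for illustration and are not the FATE-LLM "pellm" module:

```python
# Minimal sketch of the multi-task layout described in the notebook: one shared GPT-2
# backbone with two task-specific heads. Plain PyTorch/transformers illustration;
# class and head names are hypothetical, NOT the FATE-LLM "pellm" API.
import torch
import torch.nn as nn
from transformers import GPT2Model

class MultiTaskGPT2(nn.Module):
    def __init__(self, num_sentiment_labels: int = 2, num_ner_labels: int = 9):
        super().__init__()
        self.backbone = GPT2Model.from_pretrained("gpt2")               # shared encoder
        hidden = self.backbone.config.hidden_size
        self.sentiment_head = nn.Linear(hidden, num_sentiment_labels)   # sequence-level head
        self.ner_head = nn.Linear(hidden, num_ner_labels)               # token-level head

    def forward(self, input_ids, attention_mask, task: str):
        hidden_states = self.backbone(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state                          # (batch, seq_len, hidden)
        if task == "sentiment":
            # pool the last non-padded token, since GPT-2 is a left-to-right model
            last_idx = attention_mask.sum(dim=1) - 1
            pooled = hidden_states[torch.arange(hidden_states.size(0)), last_idx]
            return self.sentiment_head(pooled)       # (batch, num_sentiment_labels)
        return self.ner_head(hidden_states)          # (batch, seq_len, num_ner_labels)

model = MultiTaskGPT2()
dummy_ids = torch.randint(0, 50256, (2, 16))
dummy_mask = torch.ones_like(dummy_ids)
print(model(dummy_ids, dummy_mask, task="sentiment").shape)  # torch.Size([2, 2])
print(model(dummy_ids, dummy_mask, task="ner").shape)        # torch.Size([2, 16, 9])
```

Only the two task heads differ; the shared backbone (or, in a parameter-efficient setup, its adapters) is what a federated run would keep in sync across parties.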