Merged
Changes from all commits
Commits
40 commits
7305531
stream_function should raise error
joowon-dm-snu Jul 26, 2023
72b1fb0
Merge branch 'main' of https://github.com/coxwave/semantic-kernel
joowon-dm-snu Jul 29, 2023
25cb2bd
implement stepwise planner
joowon-dm-snu Jul 30, 2023
84be427
add bing settings
joowon-dm-snu Jul 30, 2023
c22bcbb
add tests
joowon-dm-snu Jul 30, 2023
98cf1e1
revert commit comes from other branch
joowon-dm-snu Jul 30, 2023
4ee2cd8
fix errors and missing implementations
joowon-dm-snu Aug 1, 2023
2f584a0
update planner
joowon-dm-snu Aug 3, 2023
513dd98
update tests
joowon-dm-snu Aug 3, 2023
bc61d2e
propagate stop sequence to ai service (temporal)
joowon-dm-snu Aug 3, 2023
2a80196
Merge branch 'main' into python-add-stepwise-planner
joowon-dm-snu Aug 3, 2023
e51a38c
Merge branch 'python-add-stepwise-planner' of https://github.com/coxw…
joowon-dm-snu Aug 3, 2023
0f660b2
update tests
joowon-dm-snu Aug 3, 2023
8a1b563
prompt engineering
joowon-dm-snu Aug 3, 2023
859b9d6
fix errors
joowon-dm-snu Aug 3, 2023
c4578f6
run pre-commit
joowon-dm-snu Aug 3, 2023
665a904
Merge branch 'main' into python-add-stepwise-planner
awharrison-28 Aug 4, 2023
77443b4
Update settings.py
joowon-dm-snu Aug 8, 2023
e14dcf5
Merge branch 'main' into python-add-stepwise-planner
awharrison-28 Aug 9, 2023
d7fbb3d
updates for reviews
joowon-dm-snu Aug 10, 2023
b59e405
Merge branch 'main' into python-add-stepwise-planner
joowon-dm-snu Aug 10, 2023
1568a16
update test
joowon-dm-snu Aug 10, 2023
cff892c
revert mistake
joowon-dm-snu Aug 10, 2023
424f846
fix error, for #2329(skill definition changed)
joowon-dm-snu Aug 10, 2023
9f3b420
Merge branch 'main' into python-add-stepwise-planner
joowon-dm-snu Aug 14, 2023
0aabaa8
change for review
joowon-dm-snu Aug 16, 2023
2017649
Merge branch 'main' into python-add-stepwise-planner
awharrison-28 Aug 17, 2023
488ac8f
Merge branch 'main' into python-add-stepwise-planner
lemillermicrosoft Aug 22, 2023
8e944ec
🔄 Reorder and update chat completion parameters
lemillermicrosoft Aug 22, 2023
a8e1b34
Merge branch 'main' into python-add-stepwise-planner
lemillermicrosoft Aug 22, 2023
19cff78
Merge branch 'main' into python-add-stepwise-planner
awharrison-28 Aug 22, 2023
6933d14
Merge branch 'main' into python-add-stepwise-planner
awharrison-28 Aug 23, 2023
4b172d4
precommit checks
awharrison-28 Aug 23, 2023
0d63abb
Merge branch 'main' into python-add-stepwise-planner
awharrison-28 Aug 23, 2023
127926d
update regex
joowon-dm-snu Aug 23, 2023
2095562
update test case more stable
joowon-dm-snu Aug 23, 2023
15bc72b
precommit checks
joowon-dm-snu Aug 23, 2023
6565df7
change test case
joowon-dm-snu Aug 23, 2023
76d6bc0
Merge branch 'main' into python-add-stepwise-planner
awharrison-28 Aug 23, 2023
396d872
Merge branch 'main' into python-add-stepwise-planner
lemillermicrosoft Aug 23, 2023
2 changes: 2 additions & 0 deletions python/semantic_kernel/__init__.py
@@ -16,6 +16,7 @@
from semantic_kernel.utils.null_logger import NullLogger
from semantic_kernel.utils.settings import (
azure_openai_settings_from_dot_env,
bing_search_settings_from_dot_env,
google_palm_settings_from_dot_env,
openai_settings_from_dot_env,
pinecone_settings_from_dot_env,
@@ -29,6 +30,7 @@
"azure_openai_settings_from_dot_env",
"postgres_settings_from_dot_env",
"pinecone_settings_from_dot_env",
"bing_search_settings_from_dot_env",
"google_palm_settings_from_dot_env",
"PromptTemplateConfig",
"PromptTemplate",
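The new bing_search_settings_from_dot_env export mirrors the existing *_settings_from_dot_env helpers. A minimal usage sketch, assuming a local .env file holding the Bing key (the exact .env variable name and the single-string return value are assumptions, not confirmed by this diff):

    import semantic_kernel as sk

    # Reads the Bing Search API key from the local .env file (assumed key name: BING_API_KEY).
    bing_api_key = sk.bing_search_settings_from_dot_env()
    # The key can then be handed to a Bing search connector once one is registered.
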
@@ -1,7 +1,7 @@
# Copyright (c) Microsoft. All rights reserved.

from dataclasses import dataclass, field
from typing import TYPE_CHECKING, Dict
from typing import TYPE_CHECKING, Dict, List

if TYPE_CHECKING:
from semantic_kernel.semantic_functions.prompt_template_config import (
@@ -18,16 +18,19 @@ class ChatRequestSettings:
number_of_responses: int = 1
max_tokens: int = 256
token_selection_biases: Dict[int, int] = field(default_factory=dict)
stop_sequences: List[str] = field(default_factory=list)

def update_from_completion_config(
self, completion_config: "PromptTemplateConfig.CompletionConfig"
):
self.temperature = completion_config.temperature
self.top_p = completion_config.top_p
self.presence_penalty = completion_config.presence_penalty
self.frequency_penalty = completion_config.frequency_penalty
self.number_of_responses = completion_config.number_of_responses
self.stop_sequences = completion_config.stop_sequences
self.max_tokens = completion_config.max_tokens
self.presence_penalty = completion_config.presence_penalty
self.frequency_penalty = completion_config.frequency_penalty
self.token_selection_biases = completion_config.token_selection_biases

@staticmethod
def from_completion_config(
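The new stop_sequences field defaults to an empty list and is now also copied over in update_from_completion_config. A small sketch of constructing the settings directly with a stop sequence (the import path is assumed from the package layout at the time of this PR):

    from semantic_kernel.connectors.ai.chat_request_settings import ChatRequestSettings

    settings = ChatRequestSettings(
        temperature=0.0,
        max_tokens=256,
        stop_sequences=["[OBSERVATION]"],  # generation halts when this marker is emitted
    )
    print(settings.stop_sequences)  # ['[OBSERVATION]']
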
@@ -101,15 +101,8 @@ async def complete_async(
str -- The completed text.
"""
prompt_to_message = [("user", prompt)]
chat_settings = ChatRequestSettings(
temperature=request_settings.temperature,
top_p=request_settings.top_p,
presence_penalty=request_settings.presence_penalty,
frequency_penalty=request_settings.frequency_penalty,
max_tokens=request_settings.max_tokens,
number_of_responses=request_settings.number_of_responses,
token_selection_biases=request_settings.token_selection_biases,
)
chat_settings = ChatRequestSettings.from_completion_config(request_settings)

response = await self._send_chat_request(
prompt_to_message, chat_settings, False
)
@@ -131,6 +124,7 @@ async def complete_stream_async(
max_tokens=request_settings.max_tokens,
number_of_responses=request_settings.number_of_responses,
token_selection_biases=request_settings.token_selection_biases,
stop_sequences=request_settings.stop_sequences,
)
response = await self._send_chat_request(prompt_to_message, chat_settings, True)

@@ -205,11 +199,17 @@ async def _send_chat_request(
messages=formatted_messages,
temperature=request_settings.temperature,
top_p=request_settings.top_p,
presence_penalty=request_settings.presence_penalty,
frequency_penalty=request_settings.frequency_penalty,
max_tokens=request_settings.max_tokens,
n=request_settings.number_of_responses,
stream=stream,
stop=(
request_settings.stop_sequences
if request_settings.stop_sequences is not None
and len(request_settings.stop_sequences) > 0
else None
),
max_tokens=request_settings.max_tokens,
presence_penalty=request_settings.presence_penalty,
frequency_penalty=request_settings.frequency_penalty,
logit_bias=(
request_settings.token_selection_biases
if request_settings.token_selection_biases is not None
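The reordered keyword arguments also guard the new stop parameter: an empty or missing stop_sequences list is translated to None rather than forwarded to the OpenAI client. A standalone sketch of that guard, separate from the connector code:

    def normalize_stop(stop_sequences):
        # Forward the list only when it actually contains stop strings; otherwise omit it.
        return stop_sequences if stop_sequences else None

    assert normalize_stop([]) is None
    assert normalize_stop(None) is None
    assert normalize_stop(["[OBSERVATION]"]) == ["[OBSERVATION]"]
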
4 changes: 3 additions & 1 deletion python/semantic_kernel/planning/__init__.py
@@ -4,10 +4,12 @@
from semantic_kernel.planning.basic_planner import BasicPlanner
from semantic_kernel.planning.plan import Plan
from semantic_kernel.planning.sequential_planner import SequentialPlanner
from semantic_kernel.planning.stepwise_planner import StepwisePlanner

__all__ = [
"SequentialPlanner",
"BasicPlanner",
"Plan",
"SequentialPlanner",
"StepwisePlanner",
"ActionPlanner",
]
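With StepwisePlanner now exported from semantic_kernel.planning, a hedged usage sketch follows; the constructor and invocation signatures below are assumed to mirror the C# planner surface, and a chat completion service plus any callable skills must be registered on the kernel first:

    import asyncio
    import semantic_kernel as sk
    from semantic_kernel.planning import StepwisePlanner

    async def main() -> None:
        kernel = sk.Kernel()
        # ...add a chat completion service and import the skills the planner may call...
        planner = StepwisePlanner(kernel)
        plan = planner.create_plan("What is the tallest mountain on Earth?")
        result = await plan.invoke_async()
        print(result.result)

    asyncio.run(main())
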
@@ -0,0 +1,32 @@
{
"schema": 1,
"description": "Given a request or command or goal generate multi-step plan to reach the goal. After each step LLM is called to perform the reasoning for the next step.",
"type": "completion",
"completion": {
"max_tokens": 1024,
"temperature": 0,
"top_p": 0,
"presence_penalty": 0,
"frequency_penalty": 0,
"stop_sequences": ["[OBSERVATION]", "\n[THOUGHT]"]
},
"input": {
"parameters": [
{
"name": "question",
"description": "The question to answer",
"defaultValue": ""
},
{
"name": "agentScratchPad",
"description": "The agent's scratch pad",
"defaultValue": ""
},
{
"name": "functionDescriptions",
"description": "The manual of the agent's functions",
"defaultValue": ""
}
]
}
}
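The stop_sequences entry above is what lets the planner cut the model off at "[OBSERVATION]" and inject the real function result itself. A plain-Python sketch (not part of the PR) of reading the completion block back from this file:

    import json

    with open("config.json") as f:
        config = json.load(f)

    completion = config["completion"]
    print(completion["stop_sequences"])  # ['[OBSERVATION]', '\n[THOUGHT]']
    print(completion["max_tokens"])      # 1024
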
@@ -0,0 +1,53 @@
[INSTRUCTION]
Answer the following questions as accurately as possible using the provided functions.

[AVAILABLE FUNCTIONS]
The function definitions below are in the following format:
<functionName>: <description>
inputs:
- <parameterName>: <parameterDescription>
- ...

{{$function_descriptions}}
[END AVAILABLE FUNCTIONS]

[USAGE INSTRUCTIONS]
To use the functions, specify a JSON blob representing an action. The JSON blob should contain an "action" key with the name of the function to use, and an "action_variables" key with a JSON object of string values to use when calling the function.
Do not call functions directly; they must be invoked through an action.
The "action_variables" value should always include an "input" key, even if the input value is empty. Additional keys in the "action_variables" value should match the defined [PARAMETERS] of the named "action" in [AVAILABLE FUNCTIONS].
Dictionary values in "action_variables" must be strings and represent the actual values to be passed to the function.
Ensure that the $JSON_BLOB contains only a SINGLE action; do NOT return multiple actions.
IMPORTANT: Use only the available functions listed in the [AVAILABLE FUNCTIONS] section. Do not attempt to use any other functions that are not specified.

Here is an example of a valid $JSON_BLOB:
{
"action": "functionName",
"action_variables": {"parameterName": "some value", ...}
}
[END USAGE INSTRUCTIONS]
[END INSTRUCTION]

[THOUGHT PROCESS]
[QUESTION]
the input question I must answer
[THOUGHT]
To solve this problem, I should carefully analyze the given question and identify the necessary steps. Any facts I discover earlier in my thought process should be repeated here to keep them readily available.
[ACTION]
{
"action": "functionName",
"action_variables": {"parameterName": "some value", ...}
}
[OBSERVATION]
The result of the action will be provided here.
... (These Thought/Action/Observation can repeat until the final answer is reached.)
[FINAL ANSWER]
Once I have gathered all the necessary observations and performed any required actions, I can provide the final answer in a clear and human-readable format.
[END THOUGHT PROCESS]

Let's break down the problem step by step and think about the best approach. Questions and observations should be followed by a single thought and an optional single action to take.

Begin!

[QUESTION]
{{$question}}
{{$agent_scratch_pad}}
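The {{$agent_scratch_pad}} variable is where the planner replays earlier iterations back to the model. An illustrative sketch (the function name WebSearch.search is hypothetical) of how one THOUGHT/ACTION/OBSERVATION round might be appended between calls:

    steps = [
        (
            "I should look up the height of the tallest mountain.",
            '{"action": "WebSearch.search", "action_variables": {"input": "tallest mountain height"}}',
            "Mount Everest is 8,849 m tall.",
        ),
    ]

    scratch_pad = ""
    for thought, action, observation in steps:
        scratch_pad += f"[THOUGHT]\n{thought}\n[ACTION]\n{action}\n[OBSERVATION]\n{observation}\n"

    print(scratch_pad)
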
@@ -0,0 +1,5 @@
from semantic_kernel.planning.stepwise_planner.stepwise_planner import StepwisePlanner

__all__ = [
"StepwisePlanner",
]