- [2024/07] You can now install our package with `pip install knowledge-storm`!
- [2024/07] We added `VectorRM` to support grounding on user-provided documents, complementing the existing support for search engines (`YouRM`, `BingSearch`). (check out [#58](https://github.com/stanford-oval/storm/pull/58))
- [2024/07] We released a demo light for developers: a minimal user interface built with the Streamlit framework in Python, handy for local development and demo hosting (check out [#54](https://github.com/stanford-oval/storm/pull/54)).
- [2024/06] We will present STORM at NAACL 2024! Find us at Poster Session 2 on June 17 or check our [presentation material](assets/storm_naacl2024_slides.pdf).
- [2024/05] We added Bing Search support in [rm.py](knowledge_storm/rm.py). Test STORM with `GPT-4o` - we now configure the article generation part in our demo using the `GPT-4o` model.
- [2024/04] We released a refactored version of the STORM codebase! We define an [interface](knowledge_storm/interface.py) for the STORM pipeline and reimplement STORM-wiki (check out [`knowledge_storm/storm_wiki`](knowledge_storm/storm_wiki)) to demonstrate how to instantiate the pipeline. We provide an API to support customization of different language models and retrieval/search integration.
## API

The STORM knowledge curation engine is defined as a simple Python `STORMWikiRunner` class.

As STORM works at the information curation layer, you need to set up the information retrieval module and the language model module to create a `STORMWikiRunner` instance. Here is an example of using the You.com search engine and OpenAI models.
```python
import os

from knowledge_storm import STORMWikiRunnerArguments, STORMWikiRunner, STORMWikiLMConfigs
from knowledge_storm.lm import OpenAIModel
from knowledge_storm.rm import YouRM

lm_configs = STORMWikiLMConfigs()
openai_kwargs = {
    'api_key': os.getenv("OPENAI_API_KEY"),
    'temperature': 1.0,
    'top_p': 0.9,
}
# STORM is a LM system, so different components can be powered by different
# models to reach a good balance between cost and quality.
# As a good practice, choose a cheaper/faster model for `conv_simulator_lm`,
# which is used to split queries and synthesize answers in the conversation.
# Choose a more powerful model for `article_gen_lm` to generate verifiable
# text with citations.
```
- `OpenAIModel`, `AzureOpenAIModel`, `ClaudeModel`, `VLLMClient`, `TGIClient`, `TogetherClient`, `OllamaClient` as language model components
- `YouRM`, `BingSearch`, `VectorRM` as retrieval module components

:star2: **PRs for integrating more language models into [knowledge_storm/lm.py](knowledge_storm/lm.py) and search engines/retrievers into [knowledge_storm/rm.py](knowledge_storm/rm.py) are highly appreciated!**

The `STORMWikiRunner` instance can be invoked with the simple `run` method:
```python
topic = input('Topic: ')

runner.run(
    topic=topic,
    do_research=True,
    do_generate_outline=True,
    do_generate_article=True,
    do_polish_article=True,
)
runner.post_run()
runner.summary()
```
- `do_research`: if True, simulate conversations with different perspectives to collect information about the topic; otherwise, load the results.
- `do_generate_outline`: if True, generate an outline for the topic; otherwise, load the results.
- `do_generate_article`: if True, generate an article for the topic based on the outline and the collected information; otherwise, load the results.
- `do_polish_article`: if True, polish the article by adding a summarization section and (optionally) removing duplicate content; otherwise, load the results.
## Quick Start with Example Scripts

We provide scripts in our [examples folder](examples) as a quick start to run STORM with different configurations.

**To run STORM with `gpt` family models with default configurations:**
1. We suggest using `secrets.toml` to set up the API keys. Create a file `secrets.toml` under the root directory and add the following content:

   ```shell
   # Set up OpenAI API key.
   OPENAI_API_KEY="your_openai_api_key"
   # Set up You.com search API key.
   YDC_API_KEY="your_youcom_api_key"
   ```

2. Run the following command.

   ```
   python examples/run_storm_wiki_gpt.py \
       --output-dir $OUTPUT_DIR \
       --retriever you \
       --do-research \
       --do-generate-outline \
       --do-generate-article \
       --do-polish-article
   ```
**To run STORM using your favorite language models or grounding on your own corpus:** Check out [examples/README.md](examples/README.md).
## Customize STORM
### Customization of the Pipeline
If you have installed the source code, you can customize STORM based on your own use case. The STORM engine consists of 4 modules:
1. Knowledge Curation Module: Collects a broad coverage of information about the given topic.
2. Outline Generation Module: Organizes the collected information by generating a hierarchical outline for the curated knowledge.
3. Article Generation Module: Populates the generated outline with the collected information.
4. Article Polishing Module: Refines and enhances the written article for better presentation.
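Conceptually, the four modules form a linear pipeline in which each stage consumes the previous stage's output. The following is a schematic sketch with invented stub functions, not the actual engine code:

```python
def curate_knowledge(topic):
    # Stage 1: collect information about the topic (stub).
    return [f"fact about {topic}"]

def generate_outline(information):
    # Stage 2: organize the collected information into an outline (stub).
    return ["# Overview"]

def generate_article(outline, information):
    # Stage 3: populate the outline with the collected information (stub).
    return "\n".join(outline + information)

def polish_article(article):
    # Stage 4: refine the draft for presentation (stub).
    return article.strip()

def run_pipeline(topic):
    info = curate_knowledge(topic)
    outline = generate_outline(info)
    return polish_article(generate_article(outline, info))
```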
The interface for each module is defined in `knowledge_storm/interface.py`, while their implementations are instantiated in `knowledge_storm/storm_wiki/modules/*`. These modules can be customized according to your specific requirements (e.g., generating sections in bullet point format instead of full paragraphs).
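For instance, a bullet-point outline module might be plugged in along these lines. The abstract class below is only a stand-in for the real interface in `knowledge_storm/interface.py`; the class and method names are invented for illustration:

```python
from abc import ABC, abstractmethod

class OutlineGenerationModule(ABC):
    """Stand-in for the real interface defined in knowledge_storm/interface.py."""

    @abstractmethod
    def generate_outline(self, topic, information):
        """Return an outline for `topic` built from the collected `information`."""

class BulletOutlineModule(OutlineGenerationModule):
    """Hypothetical customization: render the outline as bullet points."""

    def generate_outline(self, topic, information):
        return "\n".join(f"- {heading}" for heading in information)
```

A class like this would replace the default implementation from `knowledge_storm/storm_wiki/modules/*` when assembling the pipeline.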
## Replicate NAACL2024 result
Please switch to the branch `NAACL-2024-code-backup`.
The FreshWiki dataset used in our experiments can be found in [./FreshWiki](FreshWiki).
Run the following commands under [./src](knowledge_storm).
The generated article will be saved in `{output_dir}/{topic}/storm_gen_article.txt` and the references corresponding to the citation index will be saved in `{output_dir}/{topic}/url_to_info.json`. If `--do-polish-article` is set, the polished article will be saved in `{output_dir}/{topic}/storm_gen_article_polished.txt`.
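Given those file conventions, the artifacts of one run can be located programmatically. This small sketch assumes only the paths stated above (the topic directory name is used verbatim here):

```python
import os

def output_paths(output_dir, topic):
    """Map artifact names to the output files described above."""
    base = os.path.join(output_dir, topic)
    return {
        "article": os.path.join(base, "storm_gen_article.txt"),
        "polished_article": os.path.join(base, "storm_gen_article_polished.txt"),
        "references": os.path.join(base, "url_to_info.json"),
    }
```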
### Customize the STORM Configurations
We set up the default LLM configuration in `LLMConfigs` in [src/modules/utils.py](knowledge_storm/modules/utils.py). You can use `set_conv_simulator_lm()`, `set_question_asker_lm()`, `set_outline_gen_lm()`, `set_article_gen_lm()`, `set_article_polish_lm()` to override the default configuration. These functions take in an instance from `dspy.dsp.LM` or `dspy.dsp.HFModel`.
### Automatic Evaluation
</details>
## Roadmap & Contributions
Our team is actively working on:
1. Human-in-the-Loop Functionalities: Supporting user participation in the knowledge curation process.
2. Information Abstraction: Developing abstractions for curated information to support presentation formats beyond the Wikipedia-style report.
If you have any questions or suggestions, please feel free to open an issue or pull request. We welcome contributions to improve the system and the codebase!