Releases: h2oai/h2o-llmstudio
v0.2.1
What's Changed
- Fix: Load default user settings by @maxjeblick in #419
- Refactor: conversation chaining by @maxjeblick in #386
- Fix: perplexity metric in RLHF by @pascal-pfeiffer in #410
- Fix: HF push cpu_shard, soften value check by @pascal-pfeiffer in #413
Full Changelog: v0.2.0...v0.2.1
v0.2.0
What's Changed
- [DOCS] Document how to install custom packages by @sherenem in #405
- [DevOps] Update LLM packer scripts by @DinukaH2O in #406
- Auto lora layers by @psinger in #408
- add dataset import from azure by @pascal-pfeiffer in #407
- Packages and Lora fixes by @psinger in #411
- Enhancements to Secret Management by @maxjeblick in #364
Full Changelog: v0.1.1...v0.2.0
v0.1.1
What's Changed
- Contribution guide by @maxjeblick in #329
- Fix no gpu experiments by @maxjeblick in #360
- Fix markdown plotting by @maxjeblick in #371
- clean space for docker build by @pascal-pfeiffer in #375
- ignore warnings for loading config yaml by @maxjeblick in #373
- Hotfix filterwarnings by @maxjeblick in #376
- only kill parent process when failing in ddp mode by @pascal-pfeiffer in #378
- Utility function to use the CLI interface to publish models to Hugging Face by @diegomiranda02 in #300
- Fixes seq2seq Perplexity by @psinger in #385
- Update FAQ answer by @sherenem in #388
- [DevOps] Update packer scripts for policy requirements by @DinukaH2O in #391
- add integration tests for all problem types by @pascal-pfeiffer in #380
- H2O_WAVE_APP_ADDRESS=http://127.0.0.1:8756 to unblock port 8000 by @pascal-pfeiffer in #394
- heap (default off) by @pascal-pfeiffer in #389
- Readme custom packages by @psinger in #395
- v0.1.1 release version by @pascal-pfeiffer in #401
New Contributors
- @diegomiranda02 made their first contribution in #300
- @DinukaH2O made their first contribution in #391
Full Changelog: v0.1.0...v0.1.1
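The v0.1.1 notes above mention setting H2O_WAVE_APP_ADDRESS to free port 8000 (#394). A minimal sketch of that override before launching the app (`make llmstudio` is the launch command referenced in the v0.0.6 notes; exact behavior may differ between versions):

```shell
# Point the Wave app at an alternate local address so port 8000 stays free (#394).
export H2O_WAVE_APP_ADDRESS="http://127.0.0.1:8756"

# Launch H2O LLM Studio from a local checkout (per v0.0.6, this target
# runs without Wave watching for file changes).
make llmstudio
```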
v0.1.0
What's Changed
- New Experiment Summary page by @fatihozturkh2o in #316
- cuda118 pytorch by @pascal-pfeiffer in #325
- HF friendly repo name by @fatihozturkh2o in #322
- Max/insights table view by @maxjeblick in #301
- embed new video into videos section for LLM studio by @shaunyogeshwaran in #332
- Update build-and-push-release.yml by @pascal-pfeiffer in #334
- Sequence to Sequence Problem Type by @psinger in #308
- upd packages by @pascal-pfeiffer in #341
- saving hf yaml separately by @fatihozturkh2o in #342
- Fixing generation code by @psinger in #348
- Remove pickle support by @maxjeblick in #351
- Fix get_size_str by @maxjeblick in #353
- add-CPU-RAM-requirements-docs by @shaunyogeshwaran in #344
- Fix tests and test workflow by @maxjeblick in #355
- Refactor rlhf by @maxjeblick in #328
- llama2 7b with int4 quantization as default by @pascal-pfeiffer in #367
New Contributors
- @shaunyogeshwaran made their first contribution in #332
Full Changelog: v0.0.6...v0.1.0
v0.0.6
What's Changed
- delete redundant code by @maxjeblick in #271
- Token IDs (again) by @psinger in #265
- end user shall use "make llmstudio" to run without wave checking for file changes by @pascal-pfeiffer in #269
- Fix neptune version by @psinger in #273
- Minor model template changes by @psinger in #274
- Minor doc improvements by @sherenem in #261
- dont compute np.mean(val_losses) twice by @Quetzalcohuatl in #275
- account correctly for day of year by @pascal-pfeiffer in #277
- Package upgrades and minor changes by @psinger in #291
- Removal of llama force slow by @psinger in #292
- Add data sanity checks by @maxjeblick in #289
- LLama2 config fix by @psinger in #294
- Set huggingface token when loading tokenizers by @maxjeblick in #296
- Use unk token for pad if available by @psinger in #298
- Refactor problem type by @maxjeblick in #293
- Add required URLs to LLM Studio installation guide by @sherenem in #310
- Show git version in error tab by @maxjeblick in #306
- Fix Runpod support / fix copy_config to use output dir config (#280) by @Glavin001 in #281
New Contributors
- @Quetzalcohuatl made their first contribution in #275
Full Changelog: v0.0.5...v0.0.6
v0.0.5
What's Changed
- export cache dir to save downloaded backbone. by @osiire in #212
- Generation config fix by @psinger in #248
- add missing cfg by @pascal-pfeiffer in #252
- Customize data_folder & output_folder with environment variables (#216) by @Glavin001 in #217
- Add missing content to the Experiments section by @oshi98 in #249
- Improve sampling for chained samples by @psinger in #254
- Fix limit_chained_samples by @maxjeblick in #256
- Hotfix to avoid deletion of datasets by @maxjeblick in #262
- System prompt by @psinger in #250
- update transformers and other dependencies by @pascal-pfeiffer in #266
- add: safe serialization while pushing to hf by @shivance in #221
- introduce "H2O_LLM_STUDIO_WORKDIR" by @pascal-pfeiffer in #268
New Contributors
- @osiire made their first contribution in #212
- @Glavin001 made their first contribution in #217
- @shivance made their first contribution in #221
Full Changelog: v0.0.4...v0.0.5
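The v0.0.5 notes above add environment-variable overrides for the data and output folders (#217) and introduce H2O_LLM_STUDIO_WORKDIR (#268). A minimal sketch of setting the working directory before launch; only H2O_LLM_STUDIO_WORKDIR is named in these notes, and the exact variable names for the folder overrides are not given here:

```shell
# Base working directory for H2O LLM Studio data and outputs (#268).
export H2O_LLM_STUDIO_WORKDIR="/data/llmstudio"

make llmstudio
```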
v0.0.4
What's Changed
- Default settings adjustment by @psinger in #154
- Manual loss & chained setting by @psinger in #142
- support azure endpoints for openai api by @pascal-pfeiffer in #147
- update transformers to 4.30.1 by @maxjeblick in #158
- fix for additional pretraining and more detailed FAQ by @pascal-pfeiffer in #155
- switch back to default chat behaviour by @pascal-pfeiffer in #159
- enable train progress bar logging for sub 1 epoch evals by @pascal-pfeiffer in #162
- added perplexity as metric by @pascal-pfeiffer in #157
- personalize chatbot by @pascal-pfeiffer in #161
- hotfix by @pascal-pfeiffer in #164
- fix for custom chatbot_name and chatbot_author by @pascal-pfeiffer in #165
- Add documentation about updating H2O LLM Studio by @maxjeblick in #169
- Pp/rlhf by @pascal-pfeiffer in #152
- Pp/typing by @pascal-pfeiffer in #173
- Allow to change branch for downloading huggingface models by @haqishen in #168
- Use huggingface auth token for downloading models by @maxjeblick in #167
- Dataset refactoring by @maxjeblick in #177
- fix style command by @maxjeblick in #178
- Fix auth token for falcon models by @maxjeblick in #181
- Package update by @psinger in #176
- Run llmstudio docker image as llmstudio user instead of root user by @tomkraljevic in #163
- Dtype & HF Push Changes by @psinger in #182
- Use cache changes by @psinger in #188
- Add falcon peft target modules by @maxjeblick in #166
- Def loss in dictionary by @psinger in #193
- GPT metric endpoint by @psinger in #195
- Refactoring by @maxjeblick in #192
- Make gpu id for chat configurable by @maxjeblick in #184
- Update wave by @maxjeblick in #198
- Update README.md by @psinger in #199
- Push official docs site for LLM Studio by @sherenem in #156
- Fix documentation base URL by @sherenem in #201
- Diverse adjustments by @psinger in #207
- Fix link to documentation in README by @tmm1 in #202
- rlhf batches by @pascal-pfeiffer in #197
- Add doc issue template by @sherenem in #204
- check available disk space by @haqishen in #194
- Wave streaming by @maxjeblick in #205
- Checkpoint changes by @psinger in #222
- add torch uint8 for disk check by @haqishen in #224
- Fix tokenizer length by @psinger in #231
- Fix chat bug by @maxjeblick in #230
- cast to float before converting to numpy by @pascal-pfeiffer in #234
- make lora slider max values adjustable by @pascal-pfeiffer in #236
- Bitsandbytes fixes by @psinger in #226
- version bump to 0.0.4 and default batch_size 2 by @pascal-pfeiffer in #225
New Contributors
- @haqishen made their first contribution in #168
- @tomkraljevic made their first contribution in #163
- @tmm1 made their first contribution in #202
Full Changelog: v0.0.3...v0.0.4
v0.0.3
What's Changed
- update readme to include support for installing nvidia drivers if req… by @jfarland in #114
- chat bubbles switched, user on right side by @pascal-pfeiffer in #121
- [DevOps] Fix Snyk project issue by @ChathurindaRanasinghe in #115
- FAQ by @pascal-pfeiffer in #122
- replace(".", "-") by @pascal-pfeiffer in #124
- Fix local download of model by @maxjeblick in #126
- [DevOps] Fix condition issue for snyk test & snyk monitor by @ChathurindaRanasinghe in #127
- fix bug when loading from config and remote code by @pascal-pfeiffer in #132
- Max/4bit by @maxjeblick in #131
- model card by @pascal-pfeiffer in #138
- GPT eval progress by @pascal-pfeiffer in #139
- Proper DDP Eval Progress by @psinger in #141
Full Changelog: v0.0.2...v0.0.3
v0.0.2
What's Changed
- Github action for docker build and push to vovran by @lakinduakash in #86
- Tokenize rework by @psinger in #90
- [DevOps] Jenkins pipeline for creating cloud images. by @ChathurindaRanasinghe in #106
- [DevOps] Snyk Integration by @ChathurindaRanasinghe in #109
- HF Push and Download improvements by @psinger in #110
- Max/chat block by @maxjeblick in #85
- update version by @maxjeblick in #113
New Contributors
- @lakinduakash made their first contribution in #86
Full Changelog: v0.0.1...v0.0.2
v0.0.1
What's Changed
- Readme Quickstart by @psinger in #3
- Set evaluate_before_training at beginning if epochs==0 by @maxjeblick in #2
- GUI Icon & Readme Logo by @psinger in #11
- Max/remove train flag by @maxjeblick in #9
- Add default values and stop tokens by @psinger in #14
- Upload Tokenizer to HF by @psinger in #19
- Update README.md by @RichardScottOZ in #21
- Update README.md by @RichardScottOZ in #22
- Minor adjustments by @psinger in #20
- Update README.md by @eltociear in #26
- Update README.md by @pascal-pfeiffer in #34
- Update README.md by @pascal-pfeiffer in #35
- update pipfile by @maxjeblick in #33
- Default dataset by @psinger in #37
- Tokenizer rework by @psinger in #32
- Chained conversation support by @psinger in #40
- Batch implementation for stop tokens by @maxjeblick in #42
- Fix Neptune by @psinger in #48
- download model button by @pascal-pfeiffer in #44
- Save/Load config as yaml by @maxjeblick in #12
- introducing busy_dialog for waiting operations by @fatihozturkh2o in #49
- Filter out empty stop tokens by @maxjeblick in #54
- Fix copy config by @maxjeblick in #52
- HF push fixes by @psinger in #57
- Fix pipfile.lock by @psinger in #64
- update pipfile by @maxjeblick in #66
- clarification regarding CLI example by @pascal-pfeiffer in #53
- Update readme by @maxjeblick in #62
- Config fixes by @psinger in #68
- Closes #70 by @psinger in #72
- Discord link by @psinger in #74
- hf push by @pascal-pfeiffer in #59
- tooltips by @pascal-pfeiffer in #50
- Update README.md by @pascal-pfeiffer in #77
- gh actions test by @pascal-pfeiffer in #47
- Add Dockerfile. by @arnocandel in #29
- Improve gpt metric calculation by @psinger in #80
- add requirements.txt by @maxjeblick in #67
- default gpt eval max by @pascal-pfeiffer in #81
- backbones by @pascal-pfeiffer in #92
New Contributors
- @RichardScottOZ made their first contribution in #21
- @eltociear made their first contribution in #26
- @fatihozturkh2o made their first contribution in #49
- @arnocandel made their first contribution in #29
Full Changelog: https://github.com/h2oai/h2o-llmstudio/commits/v0.0.1