```
INFO:root:Setting batch size to 38, learning rate to 0.0003082207001484488. (14GB GPU memory free)
INFO:root:Loading model...
INFO:root:Loaded model
INFO:root:Loading data...
56 train files, 14 test files
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-17-9bb687479d52> in <cell line: 9>()
      7 symbols = load_symbols(os.path.join(alphabet_directory, alphabet.value)) if alphabet.value else DEFAULT_ALPHABET
      8 checkpoint_path = os.path.join(checkpoint_directory, dataset.value, checkpoint.value) if checkpoint.value else None
----> 9 train(
     10     metadata_path=metadata,
     11     dataset_directory=wavs,

3 frames
/content/Voice-Cloning-App/training/tacotron2_model/stft.py in __init__(self, filter_length, hop_length, win_length, window)
     67         # get window and zero center pad it to filter_length
     68         fft_window = get_window(window, win_length, fftbins=True)
---> 69         fft_window = pad_center(fft_window, filter_length)
     70         fft_window = torch.from_numpy(fft_window).float()
     71
TypeError: pad_center() takes 1 positional argument but 2 were given
```
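For what it's worth, the failure reproduces outside the app. A minimal sketch, assuming a recent librosa (0.10+ made `size` keyword-only in `librosa.util.pad_center`); the window length and FFT size here are illustrative, not the app's actual values:

```python
from librosa.util import pad_center
from scipy.signal import get_window

# Build a window the same way stft.py does
fft_window = get_window("hann", 800, fftbins=True)

try:
    pad_center(fft_window, 1024)  # positional size: fails on librosa >= 0.10
except TypeError as err:
    print(err)  # pad_center() takes 1 positional argument but 2 were given

padded = pad_center(fft_window, size=1024)  # keyword size: works
print(padded.shape)  # (1024,)
```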
Same here. Don't know what to do, since running locally always fails with a CUDA memory error. Remote is the way to go.
Having the same issue here. Any fix?
Change librosa to version 0.8.1, which I guess has the right signature for pad_center:
http://librosa.org/doc-playground/0.8.1/generated/librosa.util.pad_center.html
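To pin it: `pip install librosa==0.8.1`. If you'd rather stay on a current librosa, the call can instead be patched to pass the length as a keyword argument, which librosa 0.10+ requires. A minimal sketch of that one-line change (file and surrounding lines taken from the traceback above, untested against the repo):

```python
# training/tacotron2_model/stft.py, inside STFT.__init__
# librosa >= 0.10 makes size keyword-only: pad_center(data, *, size, axis=-1)
fft_window = get_window(window, win_length, fftbins=True)
fft_window = pad_center(fft_window, size=filter_length)  # was: pad_center(fft_window, filter_length)
fft_window = torch.from_numpy(fft_window).float()
```

Passing `size=` as a keyword also works on 0.8.1, where the parameter is positional-or-keyword, so the patch shouldn't break a pinned setup either.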