I still can't start app.py.
I've tried various methods, but I still get an error.
I've also changed line 21 of SMPLer-X/app.py to match my environment, but it still doesn't work, and I'm not sure what else to try.
I've also tried running SMPLer-X/app.py directly.
At one point it told me that the os package in my Python environment doesn't have mkdir, which seems strange to me. I'm using miniconda3 - does that mean I need to run it with root privileges? I don't really understand this part.
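For reference, a plain os.makedirs call should not need root inside a miniconda env (this is just an illustrative snippet, not code from the repo, and the path is made up):

import os
out_dir = "./outputs/tmpdata"  # hypothetical path, adjust to your setup
os.makedirs(out_dir, exist_ok=True)  # creates parent dirs, no error if it already exists
print(os.path.isdir(out_dir))  # prints True if permissions are fine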
Hoping for some help.
The error is as follows:
10-17 21:16:31 Creating graph...
Loads checkpoint by local backend from path: /home/feng/TANGO/SMPLer-X/main/../pretrained_models/mmdet/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth
0%| | 0/1 [00:22<?, ?it/s]
Traceback (most recent call last):
File "/home/feng/TANGO/SMPLer-X/app.py", line 133, in
infer(os.path.join(video_folder, video_input), 0.5, False, False, inferer, OUT_FOLDER)
File "/home/feng/TANGO/SMPLer-X/app.py", line 103, in infer
_, _, _ = inferer.infer(original_img, in_threshold, frame, multi_person, not(render_mesh))
File "/home/feng/TANGO/SMPLer-X/main/inference.py", line 55, in infer
mmdet_results = inference_detector(self.model, original_img)
File "/home/feng/miniconda3/envs/tang/lib/python3.9/site-packages/mmdet/apis/inference.py", line 189, in inference_detector
results = model.test_step(data)[0]
File "/home/feng/miniconda3/envs/tang/lib/python3.9/site-packages/mmengine/model/base_model/base_model.py", line 145, in test_step
return self._run_forward(data, mode='predict') # type: ignore
File "/home/feng/miniconda3/envs/tang/lib/python3.9/site-packages/mmengine/model/base_model/base_model.py", line 361, in _run_forward
results = self(**data, mode=mode)
File "/home/feng/miniconda3/envs/tang/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/feng/miniconda3/envs/tang/lib/python3.9/site-packages/mmdet/models/detectors/base.py", line 94, in forward
return self.predict(inputs, data_samples)
File "/home/feng/miniconda3/envs/tang/lib/python3.9/site-packages/mmdet/models/detectors/two_stage.py", line 227, in predict
x = self.extract_feat(batch_inputs)
File "/home/feng/miniconda3/envs/tang/lib/python3.9/site-packages/mmdet/models/detectors/two_stage.py", line 112, in extract_feat
x = self.neck(x)
File "/home/feng/miniconda3/envs/tang/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/feng/miniconda3/envs/tang/lib/python3.9/site-packages/mmdet/models/necks/fpn.py", line 195, in forward
outs = [
File "/home/feng/miniconda3/envs/tang/lib/python3.9/site-packages/mmdet/models/necks/fpn.py", line 196, in
self.fpn_convsi for i in range(used_backbone_levels)
File "/home/feng/miniconda3/envs/tang/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/feng/miniconda3/envs/tang/lib/python3.9/site-packages/mmcv/cnn/bricks/conv_module.py", line 281, in forward
x = self.conv(x)
File "/home/feng/miniconda3/envs/tang/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/feng/miniconda3/envs/tang/lib/python3.9/site-packages/torch/nn/modules/conv.py", line 463, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/home/feng/miniconda3/envs/tang/lib/python3.9/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: CUDA error: the launch timed out and was terminated
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
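Following the hint in the error message, I guess the next step is to set CUDA_LAUNCH_BLOCKING and run a tiny conv2d on the GPU to see whether the timeout is specific to SMPLer-X or happens for any kernel (illustrative snippet only, the shapes are arbitrary):

import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"  # must be set before CUDA is initialized
import torch
x = torch.randn(1, 3, 224, 224, device="cuda")  # arbitrary input
w = torch.randn(16, 3, 3, 3, device="cuda")  # arbitrary conv weights
print(torch.nn.functional.conv2d(x, w).shape)  # should print torch.Size([1, 16, 222, 222])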
cd ./SMPLer-X/ && python app.py --video_folder_path ../outputs/tmpvideo/ --data_save_path ../outputs/tmpdata/ --json_save_path ../outputs/save_video.json && cd ..
Traceback (most recent call last):
File "/home/feng/TANGO/./create_graph.py", line 477, in
graph = create_graph(json_path, smplx_model)
File "/home/feng/TANGO/./create_graph.py", line 129, in create_graph
data_meta = json.load(open(json_path, "r"))
FileNotFoundError: [Errno 2] No such file or directory: './outputs/save_video.json'
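This second traceback looks like a knock-on effect of the first one: ./outputs/save_video.json is only written if the SMPLer-X step succeeds, so create_graph.py has nothing to load. A guard like the following (hypothetical, not the actual repo code) would make that clearer:

import json, os
json_path = "./outputs/save_video.json"
if not os.path.exists(json_path):
    raise SystemExit(f"{json_path} is missing - the SMPLer-X step above probably failed")
data_meta = json.load(open(json_path, "r"))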
Some weights of the model checkpoint at facebook/wav2vec2-base-960h were not used when initializing Wav2Vec2Model: ['lm_head.bias', 'lm_head.weight']
- This IS expected if you are initializing Wav2Vec2Model from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing Wav2Vec2Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of Wav2Vec2Model were not initialized from the model checkpoint at facebook/wav2vec2-base-960h and are newly initialized: ['wav2vec2.masked_spec_embed']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Some weights of the model checkpoint at facebook/wav2vec2-base-960h were not used when initializing Wav2Vec2Model: ['lm_head.bias', 'lm_head.weight']
- This IS expected if you are initializing Wav2Vec2Model from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing Wav2Vec2Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of Wav2Vec2Model were not initialized from the model checkpoint at facebook/wav2vec2-base-960h and are newly initialized: ['wav2vec2.masked_spec_embed']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Some weights of the model checkpoint at facebook/wav2vec2-base-960h were not used when initializing Wav2Vec2Model: ['lm_head.bias', 'lm_head.weight']
- This IS expected if you are initializing Wav2Vec2Model from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing Wav2Vec2Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of Wav2Vec2Model were not initialized from the model checkpoint at facebook/wav2vec2-base-960h and are newly initialized: ['wav2vec2.masked_spec_embed']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Some weights of the model checkpoint at facebook/wav2vec2-base-960h were not used when initializing Wav2Vec2Model: ['lm_head.bias', 'lm_head.weight']
- This IS expected if you are initializing Wav2Vec2Model from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing Wav2Vec2Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of Wav2Vec2Model were not initialized from the model checkpoint at facebook/wav2vec2-base-960h and are newly initialized: ['wav2vec2.masked_spec_embed']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Some weights of Wav2Vec2ForCTC were not initialized from the model checkpoint at facebook/wav2vec2-base-960h and are newly initialized: ['wav2vec2.masked_spec_embed']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertModel: ['cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.seq_relationship.weight', 'cls.predictions.transform.LayerNorm.bias']
- This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Traceback (most recent call last):
File "/home/feng/TANGO/app.py", line 767, in
demo = make_demo()
File "/home/feng/TANGO/app.py", line 752, in make_demo
gr.Examples(
File "/home/feng/miniconda3/envs/tang/lib/python3.9/site-packages/gradio/helpers.py", line 81, in create_examples
examples_obj.create()
File "/home/feng/miniconda3/envs/tang/lib/python3.9/site-packages/gradio/helpers.py", line 340, in create
self._start_caching()
File "/home/feng/miniconda3/envs/tang/lib/python3.9/site-packages/gradio/helpers.py", line 391, in _start_caching
client_utils.synchronize_async(self.cache)
File "/home/feng/miniconda3/envs/tang/lib/python3.9/site-packages/gradio_client/utils.py", line 855, in synchronize_async
return fsspec.asyn.sync(fsspec.asyn.get_loop(), func, *args, **kwargs) # type: ignore
File "/home/feng/miniconda3/envs/tang/lib/python3.9/site-packages/fsspec/asyn.py", line 103, in sync
raise return_result
File "/home/feng/miniconda3/envs/tang/lib/python3.9/site-packages/fsspec/asyn.py", line 56, in _runner
result[0] = await coro
File "/home/feng/miniconda3/envs/tang/lib/python3.9/site-packages/gradio/helpers.py", line 517, in cache
prediction = await Context.root_block.process_api(
File "/home/feng/miniconda3/envs/tang/lib/python3.9/site-packages/gradio/blocks.py", line 1935, in process_api
result = await self.call_function(
File "/home/feng/miniconda3/envs/tang/lib/python3.9/site-packages/gradio/blocks.py", line 1520, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
File "/home/feng/miniconda3/envs/tang/lib/python3.9/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/home/feng/miniconda3/envs/tang/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 2441, in run_sync_in_worker_thread
return await future
File "/home/feng/miniconda3/envs/tang/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 943, in run
result = context.run(func, *args)
File "/home/feng/miniconda3/envs/tang/lib/python3.9/site-packages/gradio/utils.py", line 826, in wrapper
response = f(*args, **kwargs)
File "/home/feng/TANGO/app.py", line 592, in tango
result = test_fn(model, device, 0, cfg.data.test_meta_paths, test_path, cfg, audio_path, create_graph=create_graph)
File "/home/feng/TANGO/app.py", line 186, in test_fn
graph = igraph.Graph.Read_Pickle(fname=pool_path)
File "/home/feng/miniconda3/envs/tang/lib/python3.9/site-packages/igraph/io/files.py", line 223, in _construct_graph_from_pickle_file
raise IOError(
OSError: Cannot load file. If fname is a file name, that filename may be incorrect.
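Same pattern here, as far as I can tell: the Gradio example caching ends up in test_fn, which tries to load a graph pickle that was never produced because the earlier steps failed. A small existence check (pool_path is just a placeholder here; the real value comes from the config in app.py) would confirm it:

import os
import igraph
pool_path = "./outputs/your_graph.pkl"  # placeholder, adjust to your setup
if os.path.isfile(pool_path):
    graph = igraph.Graph.Read_Pickle(fname=pool_path)  # same call as app.py line 186
else:
    print(f"graph pickle not found: {pool_path}")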
I have the same error.
I have the same error.