I've been working on a gesture recognition model. After a lot of updating, downgrading and debugging of various modules, I ran into this error:
Warning (from warnings module):
File "C:\Users\LENOVO\AppData\Local\Programs\Python\Python310\lib\site-packages\tensorflow_addons\utils\tfa_eol_msg.py", line 23
warnings.warn(
UserWarning:
TensorFlow Addons (TFA) has ended development and introduction of new features.
TFA has entered a minimal maintenance and release mode until a planned end of life in May 2024.
Please modify downstream libraries to take dependencies from other repositories in our TensorFlow community (e.g. Keras, Keras-CV, and Keras-NLP).
For more information see: https://github.com/tensorflow/addons/issues/2807
Traceback (most recent call last):
File "C:\Users\LENOVO\Desktop\project1.py", line 27, in <module>
data = gesture_recognizer.Dataset.from_folder(
File "C:\Users\LENOVO\AppData\Local\Programs\Python\Python310\lib\site-packages\mediapipe_model_maker\python\vision\gesture_recognizer\dataset.py", line 202, in from_folder
hand_data = _get_hand_data(
File "C:\Users\LENOVO\AppData\Local\Programs\Python\Python310\lib\site-packages\mediapipe_model_maker\python\vision\gesture_recognizer\dataset.py", line 114, in _get_hand_data
with _HandLandmarker.create_from_options(
File "C:\Users\LENOVO\AppData\Local\Programs\Python\Python310\lib\site-packages\mediapipe\tasks\python\vision\hand_landmarker.py", line 271, in create_from_options
return cls(
File "C:\Users\LENOVO\AppData\Local\Programs\Python\Python310\lib\site-packages\mediapipe\tasks\python\vision\core\base_vision_task_api.py", line 65, in __init__
self._runner = TaskRunner.create(graph_config, packet_callback)
RuntimeError: File loading is not yet supported on Windows
Versions: mediapipe-model-maker 0.1.0.2, tensorflow 2.14.0, Python 3.10.0
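For reference, this is how I confirmed the interpreter and package versions (`sys` and `importlib.metadata` are standard library; the package names are the ones pip installs):

```python
import sys
from importlib import metadata

# Python interpreter version (major, minor, micro)
print(tuple(sys.version_info[:3]))

# Installed package versions, as recorded in pip's metadata
for pkg in ("mediapipe-model-maker", "tensorflow"):
    try:
        print(pkg, metadata.version(pkg))
    except metadata.PackageNotFoundError:
        print(pkg, "not installed")
```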
Here is my original code (only the problematic part):
from mediapipe_model_maker.python.vision import gesture_recognizer

# Load the rock-paper-scissors image archive.
data = gesture_recognizer.Dataset.from_folder(
    dirname=IMAGES_PATH,
    hparams=gesture_recognizer.HandDataPreprocessingParams()
)

# Split the archive into training, validation and test datasets.
train_data, rest_data = data.split(0.8)
validation_data, test_data = rest_data.split(0.5)

# Train the model.
hparams = gesture_recognizer.HParams(export_dir="rock_paper_scissors_model")
options = gesture_recognizer.GestureRecognizerOptions(hparams=hparams)
model = gesture_recognizer.GestureRecognizer.create(
    train_data=train_data,
    validation_data=validation_data,
    options=options
)
print("done")
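The two `split` calls are intended to give an 80/10/10 train/validation/test split. A quick sanity check of that arithmetic on a plain list (this only mirrors the fractions, not MediaPipe's actual split implementation):

```python
# Dummy 100-sample dataset to check the 0.8 / 0.5 split fractions
samples = list(range(100))

# First split: 80% train, 20% rest
split_idx = int(len(samples) * 0.8)
train, rest = samples[:split_idx], samples[split_idx:]

# Second split: halve the rest into validation and test
half = int(len(rest) * 0.5)
validation, test = rest[:half], rest[half:]

print(len(train), len(validation), len(test))  # 80 10 10
```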