How do I get past "Permission Denial: ... requires that you obtain access using ACTION_OPEN_DOCUMENT or related APIs"?


I'm using react-native-firebase and react-native-document-picker while trying to follow the face detection tutorial.

Although I have already requested read access through PermissionsAndroid, I currently get the following error:

Permission Denial: reading com.android.providers.media.MediaDocumentsProvider uri [uri] from pid=4746, uid=10135 requires that you obtain access using ACTION_OPEN_DOCUMENT or related APIs

I can display the image the user picked on screen, but the react-native-firebase function does not seem to have permission to read it. The error occurs on this call: const faces = await vision().faceDetectorProcessImage(localPath);

Any suggestions on how to grant access to the face detection function, or on what I am doing wrong?

My AndroidManifest.xml file contains the following:

<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

Here is all the code in the component for reference:

import React, {useState} from 'react';
import { Button, Text, Image, PermissionsAndroid } from 'react-native';
import vision, { VisionFaceContourType } from '@react-native-firebase/ml-vision';
import DocumentPicker from 'react-native-document-picker';



// Run ML Kit face detection on the image at the given local path and log the results.
async function processFaces(localPath) {

  console.log(localPath)
  const faces = await vision().faceDetectorProcessImage(localPath);
  console.log("Got faces")

  faces.forEach(face => {
    console.log('Head rotation on Y axis: ', face.headEulerAngleY);
    console.log('Head rotation on Z axis: ', face.headEulerAngleZ);

    console.log('Left eye open probability: ', face.leftEyeOpenProbability);
    console.log('Right eye open probability: ', face.rightEyeOpenProbability);
    console.log('Smiling probability: ', face.smilingProbability);

    face.faceContours.forEach(contour => {
      if (contour.type === VisionFaceContourType.FACE) {
        console.log('Face outline points: ', contour.points);
      }
    });
  });
}

async function pickFile () {
    // Pick a single file
    try {
        const res = await DocumentPicker.pick({
            type: [DocumentPicker.types.images],
        });
        console.log(
            res.uri,
            res.type, // mime type
            res.name,
            res.size
        );
        return res
    } catch (err) {
        if (DocumentPicker.isCancel(err)) {
        // User cancelled the picker, exit any dialogs or menus and move on
            console.log("User cancelled")
        } else {
            console.log("Error picking file or processing faces")
            throw err;
        }
    }
}

// Request READ_EXTERNAL_STORAGE at runtime before letting the user pick a file.
const requestPermission = async () => {
    try {
      const granted = await PermissionsAndroid.request(
        PermissionsAndroid.PERMISSIONS.READ_EXTERNAL_STORAGE,
        {
          title: "Files Permission",
          message:
            "App needs access to your files " +
            "so you can run face detection.",
          buttonNeutral: "Ask Me Later",
          buttonNegative: "Cancel",
          buttonPositive: "OK"
        }
      );
      if (granted === PermissionsAndroid.RESULTS.GRANTED) {
        console.log("We can now read files");
      } else {
        console.log("File read permission denied");
      }
      return granted
    } catch (err) {
      console.warn(err);
    }
  };

// Screen that lets the user pick an image, shows it, and runs face detection on it.
function FaceDetectionScreen ({navigation}) {
    const [image, setImage] = useState("");
    return (
        <>
            <Text>This is the Face detection screen.</Text>
            <Button title="Select Image to detect faces" onPress={async () => {
                const permission = await requestPermission();
                if (permission === PermissionsAndroid.RESULTS.GRANTED) {
                    const pickedImage = await pickFile();
                    const pickedImageUri = pickedImage.uri
                    setImage(pickedImageUri);
                    processFaces(pickedImageUri).then(() => console.log('Finished processing file.'));
                }
                }}/>
            <Image style={{flex: 1}} source={{ uri: image}}/>
        </>
    ); 
}

export default FaceDetectionScreen;
Tags: react-native, react-native-android, react-native-firebase
1 Answer

Thanks to this comment on a GitHub issue, I was able to update my code and get it working by changing the first three lines of processFaces to:

async function processFaces(contentUri) {
  const stat = await RNFetchBlob.fs.stat(contentUri)
  const faces = await vision().faceDetectorProcessImage(stat.path);

after adding the import: import RNFetchBlob from 'rn-fetch-blob'.
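For completeness, here is a minimal sketch of how the whole updated processFaces could look (my assumption of how the pieces fit together, with rn-fetch-blob installed). RNFetchBlob.fs.stat() resolves the content:// URI returned by the document picker into a plain filesystem path, and that path is what gets handed to the face detector instead of the raw URI:

import RNFetchBlob from 'rn-fetch-blob';
import vision, { VisionFaceContourType } from '@react-native-firebase/ml-vision';

async function processFaces(contentUri) {
  // Resolve the SAF content:// URI to an absolute file path the native module can read.
  const stat = await RNFetchBlob.fs.stat(contentUri);
  console.log('Resolved file path: ', stat.path);

  // Run face detection against the resolved path instead of the content URI.
  const faces = await vision().faceDetectorProcessImage(stat.path);

  faces.forEach(face => {
    console.log('Smiling probability: ', face.smilingProbability);

    face.faceContours.forEach(contour => {
      if (contour.type === VisionFaceContourType.FACE) {
        console.log('Face outline points: ', contour.points);
      }
    });
  });
}

The rest of the component stays unchanged; the only difference from the original is that faceDetectorProcessImage now receives stat.path rather than the uri returned by DocumentPicker.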

