
video – iOS 17.0 – Record live camera feed with overlay layers


I’m building an iOS object detection app and so far so good. I can see the detected objects on a separate layer that is added on top of the VideoPreviewLayer.

func startVideo() {
    videoCapture = VideoCapture()
    videoCapture.delegate = self
    videoCapture.setUp(sessionPreset: .photo) { success in
        // .hd4K3840x2160 or .photo (4032x3024)  Warning: 4K may not work on all devices, e.g. 2019 iPod
        if success {
            // Add the video preview into the UI.
            if let previewLayer = self.videoCapture.previewLayer {
                self.view!.layer.addSublayer(previewLayer)
                self.videoCapture.previewLayer?.frame = self.view!.bounds  // resize preview layer
            }

            // Add the bounding box layers to the UI, on top of the video preview.
            for box in self.boundingBoxViews {
                box.addToLayer(self.view!.layer)
            }

            // Once everything is set up, we can start capturing live video.
            self.videoCapture.start()
        }
    }
}

However, I want to record the screen image when a specific object appears on the screen. Fairly simple, I thought: compare the detected object class and record the UIView.
This doesn’t seem to work.

func snapScreen() {
    let bounds = UIScreen.main.bounds
    UIGraphicsBeginImageContextWithOptions(bounds.size, false, 0.0)
    let context = UIGraphicsGetCurrentContext()
    //self.view!.drawHierarchy(in: bounds, afterScreenUpdates: true)

    self.view!.layer.render(in: context!)
    let img = UIGraphicsGetImageFromCurrentImageContext()
    saveScreenImage(image: img!)
    UIGraphicsEndImageContext()
}

I’m triggering this just after the bounding boxes are added to the previewLayer. The videoPreviewLayer is not captured; only the boundingBoxLayer is captured.
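
For context, this is roughly where snapScreen() gets called from the detection handler. The method name, the label check, and the "dog" class below are simplified stand-ins rather than my exact code:

    func show(predictions: [VNRecognizedObjectObservation]) {
        for (i, prediction) in predictions.enumerated() where i < boundingBoxViews.count {
            // ... update boundingBoxViews[i] with the box rect and label ...
            let label = prediction.labels.first?.identifier ?? ""

            // Capture the screen as soon as the object I care about shows up.
            if label == "dog" {
                DispatchQueue.main.async {
                    self.snapScreen()
                }
            }
        }
    }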

The CALayer.render(in: Context) documentation says all layers and sublayers will be rendered into the context.

OK, since this didn’t work, I thought the videoPreviewLayer might be missing from the sublayers, so I iterated through all of the sublayers, but that doesn’t seem to work either.

func snapScreen() {
    let bounds = UIScreen.main.bounds
    UIGraphicsBeginImageContextWithOptions(bounds.size, false, 0.0)
    let context = UIGraphicsGetCurrentContext()
    //self.view!.drawHierarchy(in: bounds, afterScreenUpdates: true)
    for layer in self.view!.layer.sublayers! {
        layer.render(in: context!)
    }
    let img = UIGraphicsGetImageFromCurrentImageContext()
    saveScreenImage(image: img!)
    UIGraphicsEndImageContext()
}

I then thought I should capture the image in the AVCapturePhotoCaptureDelegate and add the layer on top of the image.

1. This doesn’t work either; only the previewLayer is captured.
2. This is slow because of the overhead of drawing an image, and once again only the previewLayer is saved, not the image.

extension CameraViewController: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        if let error = error {
            print("error occurred: \(error.localizedDescription)")
        }
        DispatchQueue.main.async {
            if let dataImage = photo.fileDataRepresentation() {
                print(UIImage(data: dataImage)?.size as Any)
                let dataProvider = CGDataProvider(data: dataImage as CFData)
                let cgImageRef: CGImage! = CGImage(jpegDataProviderSource: dataProvider!, decode: nil, shouldInterpolate: true, intent: .defaultIntent)
                let image = UIImage(cgImage: cgImageRef, scale: 0.5, orientation: UIImage.Orientation.right)
                let bounds = UIScreen.main.bounds
                UIGraphicsBeginImageContextWithOptions(bounds.size, false, 0.0)
                let context = UIGraphicsGetCurrentContext()
                UIGraphicsPushContext(context!)
                //self.view!.drawHierarchy(in: bounds, afterScreenUpdates: true)
                // Draw the captured photo first, then render the view's layers on top.
                image.draw(at: CGPoint(x: 0, y: 0))
                UIGraphicsPopContext()
                context?.saveGState()
                self.view!.layer.render(in: context!)
                context?.restoreGState()
                let newImg = UIGraphicsGetImageFromCurrentImageContext()
                // Save to camera roll
                UIImageWriteToSavedPhotosAlbum(newImg!, nil, nil, nil)
                UIGraphicsEndImageContext()
            } else {
                print("AVCapturePhotoCaptureDelegate Error")
            }
        }
    }
}

What I ideally want:

[screenshot]

What I get in the CapturePhotoDelegate:

[screenshot]

What I get in snapScreen:

[screenshot]

If someone has an idea of what I’m doing wrong, please let me know. The one aspect I may be missing is that in snapScreen() I’m not accessing the image buffer, since it’s already loaded in the view. I may be wrong there.
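
To illustrate what I mean by accessing the image buffer: something along these lines, where the latest frame from an AVCaptureVideoDataOutput is kept and the bounding box layers are composited over it. This is an untested sketch; it assumes my VideoCapture class forwards sample buffers to its delegate and that each bounding box view exposes its CALayer.

    // Untested sketch: keep the most recent camera frame and composite the overlay onto it.
    var lastFrame: CVPixelBuffer?

    // Assumes VideoCapture forwards frames from an AVCaptureVideoDataOutput to its delegate.
    func videoCapture(_ capture: VideoCapture, didCaptureVideoFrame sampleBuffer: CMSampleBuffer) {
        lastFrame = CMSampleBufferGetImageBuffer(sampleBuffer)
    }

    func snapFrameWithOverlay() {
        guard let pixelBuffer = lastFrame else { return }

        // Convert the raw frame to a UIImage.
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        guard let cgImage = CIContext().createCGImage(ciImage, from: ciImage.extent) else { return }
        let frameImage = UIImage(cgImage: cgImage)

        // Draw the frame, then render only the overlay layers on top of it.
        let renderer = UIGraphicsImageRenderer(size: view.bounds.size)
        let composited = renderer.image { ctx in
            frameImage.draw(in: view.bounds)
            for box in boundingBoxViews {
                box.shapeLayer.render(in: ctx.cgContext)  // shapeLayer is a stand-in for the box's CALayer
            }
        }
        saveScreenImage(image: composited)
    }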
