
System.AccessViolationException on VideoCapture.Retrieve() - EmguCV / OpenCV

user4587 · Published April 24, 2018, 6:29 am

I'm new to OpenCV and I'm experimenting with the face recognition features, but I ran into trouble at one point: calling the Retrieve() method on a VideoCapture object throws a System.AccessViolationException in about one out of three cases. I found many topics about this issue, but no solutions.
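
Boiled down, the failing sequence is essentially this (a stripped-down sketch of what my Capture method further down does, not the literal code):

VideoCapture capture = new VideoCapture();   // default camera

using (Mat mat = new Mat())
{
    // this is the call that intermittently throws the System.AccessViolationException
    capture.Retrieve(mat);
}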

This is the StackTrace I get:

at Emgu.CV.CvInvoke.cveVideoCaptureRetrieve(IntPtr capture, IntPtr image, Int32 flag)

at Emgu.CV.VideoCapture.Retrieve(IOutputArray image, Int32 channel)

at OpenCVGenericAssembly.OpenCVGenericAssembly.Capture(String sensorSerialNo, FeatureType feature, Boolean compressed, Int32 timeOut) in C:\Users\sl\Documents\Source\Share\OpenCVGenericAssembly\OpenCVGenericAssembly\OpenCVGenericAssembly.cs:line 81

at OpenCVGenericAssembly.OpenCVGenericAssembly.Enroll(String sensorSerialNo, String firstName, String lastName, String company, FeatureType feature, Int32 templateDestination, Boolean compressed, Int32 timeOut, String connectionString, String templateQuery) in C:\Users\sl\Documents\Source\Share\OpenCVGenericAssembly\OpenCVGenericAssembly\OpenCVGenericAssembly.cs:line 125

at Testing.Program.Main(String[] args) in C:\Users\sl\Documents\Source\Share\OpenCVGenericAssembly\Testing\Program.cs:line 20

I am calling an Enroll method, which does nothing else but call a Capture method and wait for its response. The Capture method is supposed to run until it detects exactly one face, which is then returned; a rough sketch of the Enroll wrapper follows the Capture code below. This is what the Capture method looks like:

public DResponse Capture(string sensorSerialNo, FeatureType feature, bool compressed = false, int timeOut = 0)
{
    capture = new VideoCapture(); 
    DResponse rsp = new DResponse();

    while(string.IsNullOrWhiteSpace(rsp.templateData))
    {
        using (Mat mat = new Mat())
        {
            capture.Retrieve(mat);
            Image<Bgr, Byte> currentFrame = mat.ToImage<Bgr, Byte>();

            if (currentFrame != null)
            {
                Image<Gray, Byte> grayFrame = currentFrame.Convert<Gray, Byte>();

                Rectangle[] detectedFaces = cascadeClassifier.DetectMultiScale(grayFrame, DMS_SCALE_FACTORS, DMS_MIN_NEIGHBORS);

                if (detectedFaces.Length == 1)
                {
                    Image<Gray, byte> result = currentFrame.Copy(detectedFaces[0]).Convert<Gray, byte>().Resize(IMG_WIDTH, IMG_HEIGHT, Emgu.CV.CvEnum.Inter.Cubic);
                    result._EqualizeHist();
                    rsp.templateData = Convert.ToBase64String(result.Bytes);
                    break;
                }
                Thread.Sleep(100);
            }
        }
    }

    return rsp;
}
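
The class fields and constants used above aren't shown; they are declared roughly like this (the cascade file and the concrete values of DMS_SCALE_FACTORS, DMS_MIN_NEIGHBORS, IMG_WIDTH and IMG_HEIGHT are just placeholders here, not necessarily what I actually use):

private VideoCapture capture;
private CascadeClassifier cascadeClassifier = new CascadeClassifier(
    System.AppDomain.CurrentDomain.BaseDirectory + "haarcascade_frontalface_alt_tree.xml");  // same cascade as in the tutorial, assumed

private const double DMS_SCALE_FACTORS = 1.1;   // placeholder: scale factor for DetectMultiScale
private const int DMS_MIN_NEIGHBORS = 1;        // placeholder: minNeighbors for DetectMultiScale
private const int IMG_WIDTH = 100;              // placeholder
private const int IMG_HEIGHT = 100;             // placeholder

And the Enroll method really is just a thin wrapper around Capture, with the signature you can see in the stack trace; simplified, it looks roughly like this:

public DResponse Enroll(string sensorSerialNo, string firstName, string lastName, string company,
                        FeatureType feature, int templateDestination, bool compressed = false,
                        int timeOut = 0, string connectionString = null, string templateQuery = null)
{
    // blocks until Capture has found exactly one face and returned it as a template
    DResponse rsp = Capture(sensorSerialNo, feature, compressed, timeOut);

    // afterwards the template and person data would be stored via connectionString / templateQuery (omitted here)

    return rsp;
}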

I tried a tutorial on this first. It's a WPF app showing the video stream and drawing a frame around detected faces (plus a name, if a person is recognized). It's quite similar, but the tutorial uses a DispatcherTimer, which I can't use because my code is supposed to be used as an assembly. Anyway, that code doesn't throw this error, so maybe it helps someone pin down the problem in my source above.

private void Window_Loaded(object sender, RoutedEventArgs e)
{
    capture = new VideoCapture();
    haarCascade = new CascadeClassifier(System.AppDomain.CurrentDomain.BaseDirectory + "haarcascade_frontalface_alt_tree.xml");
    timer = new DispatcherTimer();
    timer.Tick += new EventHandler(timer_Tick);
    timer.Interval = new TimeSpan(0, 0, 0, 0, 1);
    timer.Start();
}

void timer_Tick(object sender, EventArgs e)
{
    Mat mat = new Mat();
    capture.Retrieve(mat);
    Image<Bgr, Byte> currentFrame = mat.ToImage<Bgr, Byte>();

    if (currentFrame != null)
    {
        Image<Gray, Byte> grayFrame = currentFrame.Convert<Gray, Byte>();
        Rectangle[] detectedFaces = haarCascade.DetectMultiScale(grayFrame, 1.1, 1);

        for (int i = 0; i < detectedFaces.Length; i++)
        {
            result = currentFrame.Copy(detectedFaces[i]).Convert<Gray, byte>().Resize(100, 100, Emgu.CV.CvEnum.Inter.Cubic);
            result._EqualizeHist();

            currentFrame.Draw(detectedFaces[i], new Bgr(System.Drawing.Color.Green), 3);

            if (eigenRecog.IsTrained)
            {
                // do some stuff
            }
        }

        image1.Source = ToBitmapSource(currentFrame);
    }
}
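
For completeness, ToBitmapSource is the usual GDI/WPF interop helper from the tutorial; my version is roughly the standard one (it's not related to the error, just included so the snippet is self-contained):

[System.Runtime.InteropServices.DllImport("gdi32.dll")]
private static extern bool DeleteObject(IntPtr hObject);

public static System.Windows.Media.Imaging.BitmapSource ToBitmapSource(IImage image)
{
    using (System.Drawing.Bitmap source = image.Bitmap)
    {
        IntPtr hBitmap = source.GetHbitmap();   // GDI bitmap handle

        var bitmapSource = System.Windows.Interop.Imaging.CreateBitmapSourceFromHBitmap(
            hBitmap,
            IntPtr.Zero,
            System.Windows.Int32Rect.Empty,
            System.Windows.Media.Imaging.BitmapSizeOptions.FromEmptyOptions());

        DeleteObject(hBitmap);                  // release the GDI handle to avoid leaking it
        return bitmapSource;
    }
}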

Any hints? Any questions? I'm thankful for any input! stl

