Implementing edge detection in Flutter

Note: Substantial parts of the code described here have been created during my working relationship with Truststone Software GmbH.

Edge detection is a very common task in various app types. I had assumed that there must be an easy solution in Flutter for such a common task. However, at the time of writing (2020-09-07), this is not the case. The solutions I found were:

  • edge_detection package: sounds promising from the description, but has at least one major flaw: it doesn’t provide an API that expects an image and returns the detected edges. Instead, it leads the user to a native screen in which they can perform predefined actions. This makes it unusable for most use cases, at least in a business context, because it leaves the developer unable to define the layout and design of the screen. Apps are mostly branded and require individual screen designs
  • opencv package: the idea of having OpenCV bindings available in the Flutter context is appealing. However, iOS is unsupported, which also makes it unusable for professional apps. Also, one would have to translate existing examples on the web written in C++, Java or Python into Dart calls with differing data types. Another problem is that not every OpenCV function seems to have been ported
  • Using platform channels to call native code. This would lead to a lot of code duplication between the Android and iOS parts

This tutorial describes how to implement an app that lets you use the phone’s camera to take a picture or choose one from the gallery. It then displays the detected edges on this picture. The app will be fully functional on iOS and Android and will only have one code base.

Let’s describe the properties and features of the app we want to implement:

  • A screen with a camera preview that has two buttons: take a photo and choose an image from the gallery
  • A screen that appears once an image has been taken or chosen, displaying the image and rendering the detected edges on top
  • There is a button that lets the user return from the edge screen to the main screen
  • The UI must not be blocked during edge detection. Instead, the image is shown right away and the edges appear once the process has finished

Concept

Because we want the detection to happen in native C++ in order to have a single code base, we need to make use of the Dart foreign function interface or dart:ffi.

Figure: the edge detection call chain

The call chain works as follows: in our main widget, we give the user the ability to choose a photo. The path of the photo is forwarded to our C/C++ bridge. This is the last class that actually contains Dart code. It converts the given string (the image path) to a format our C code understands. It also loads the shared library we will create from our C++ source files. There, the image is loaded from the file system to check if it exists. If it does, the Mat (the native image type of OpenCV) is passed to the respective OpenCV functions that perform the actual edge detection. The resulting list of Points is then passed back to the entry point of our native code, which converts it to a DetectionResult, essentially a C++ struct holding four coordinates (representing the relative positions of the four corners). The C/C++ bridge puts that information into a Dart type (EdgeDetectionResult) and passes it back to our widget, where this piece of information is used to render the edges on top of our image.
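
From the widget’s perspective, the whole chain collapses into a single asynchronous call. Here is a minimal preview sketch; the EdgeDetection class and the EdgeDetectionResult type are only introduced later in this tutorial:

// Hedged preview of the API built in this tutorial: detectEdges() crosses the
// FFI boundary and returns the four corners as relative Offsets in [0, 1].
final EdgeDetectionResult result = await EdgeDetection.detectEdges(imagePath);
print(result.topLeft); // e.g. Offset(0.1, 0.2), relative to the image dimensions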

Implementation

We are going to split this part of the tutorial into three sub-parts: project setup, native implementation / FFI, and Flutter widget implementation. Let’s start with the project setup.

Project setup

We want this to be a package to be able to integrate it in any Flutter project we want. That’s why we start by generating the project as follows:

flutter create --org dev.flutterclutter --template=plugin --platforms=android,ios simple_edge_detection

More information about this can be found here. We want to develop the plugin for both iOS and Android, which is why the platforms argument is necessary.

Download OpenCV

Figure: we need OpenCV for both Android and iOS

We then head over to the OpenCV releases and download the iOS pack, which comes in the form of a framework, and the Android package.

Next, we need to extract the archives and copy the contents as follows into our project directory:

cp -R sdk/native/jni/include project_root
cp -R sdk/native/libs/* project_root/android/src/main/jniLibs/

Where project_root is the directory called simple_edge_detection which resides at the location where we executed the flutter create command. If the jniLibs directory does not exist yet, create it manually first.

Why exactly these directories, you might ask yourself. When libraries are placed in the jniLibs folder, they are automatically included during the build. It’s also possible to override this behavior by editing the build.gradle.

For iOS, we also need to copy the sources:

cp -R opencv2.framework project_root/ios

Our project directory now looks like this:

simple_edge_detection
  - android
    - src
      - main
        - jniLibs
          - arm64-v8a
            - libopencv_java4.so
          - armeabi-v7a
            - libopencv_java4.so
          - x86
            - libopencv_java4.so
          - x86_64
            - libopencv_java4.so
  - include
    - opencv2
      - ...

Build setup Android

Now we need to tell Gradle to build the app in such a way that the C++ code we are going to write is accessible from the Dart context as a library.

android {
    compileSdkVersion 28

    sourceSets {
        main.java.srcDirs += 'src/main/kotlin'
    }
    defaultConfig {
        minSdkVersion 16
    }
    lintOptions {
        disable 'InvalidPackage'
    }
    externalNativeBuild {
        cmake {
            path "CMakeLists.txt"
        }
    }
    defaultConfig {
        externalNativeBuild {
            cmake {
                cppFlags '-frtti -fexceptions -std=c++11'
                arguments "-DANDROID_STL=c++_shared"
            }
        }
    }
}

The most important part is within externalNativeBuild because it tells Gradle where the CMakeLists.txt is located. This file contains the build instructions that link our code with OpenCV.

The cmake flags enable run-time type information and exceptions and tell the compiler to build the program with C++11. The argument -DANDROID_STL=c++_shared enables the use of the shared C++ runtime in shared libraries.

Now we need to create the CMakeLists.txt we have just referenced.

cmake_minimum_required(VERSION 3.6.0)
include_directories(../include)
add_library(lib_opencv SHARED IMPORTED)
set_target_properties(lib_opencv PROPERTIES IMPORTED_LOCATION ${CMAKE_CURRENT_SOURCE_DIR}/src/main/jniLibs/${ANDROID_ABI}/libopencv_java4.so)
set(EDGE_DETECTION_DIR "../ios/Classes")
set(SOURCES
    ${EDGE_DETECTION_DIR}/native_edge_detection.cpp
    ${EDGE_DETECTION_DIR}/edge_detector.cpp
)
add_library(native_edge_detection SHARED ${SOURCES})
target_link_libraries(native_edge_detection lib_opencv)

We mainly link our sources with the OpenCV library here and make it a library that can be called from our Dart code.

You might stumble across set(EDGE_DETECTION_DIR "../ios/Classes") and ask yourself why the ios directory appears in the CMake file for the Android target. We need to put our C/C++ sources there because we’ll be using CocoaPods (the package manager of the iOS app ecosystem). CocoaPods can only reference source code from directories at the same level as the .podspec file or below. It looks a little hacky because the code base is used for both platforms, but the Android build system is more tolerant in this regard, so we can reference the sources within the ios folder from here.

There are two sources: native_edge_detection.cpp and edge_detector.cpp. These are the names our C++ sources will have. native_edge_detection.cpp is going to be our entry point from FFI and edge_detector.cpp will contain the actual logic of the edge detection and call OpenCV.

Build setup iOS

Now that we are finished with setting up the build system for Android, let’s continue with iOS before we actually write the native code. This requires a few changes in the simple_edge_detection.podspec file.

#
# To learn more about a Podspec see http://guides.cocoapods.org/syntax/podspec.html.
# Run `pod lib lint simple_edge_detection.podspec' to validate before publishing.
#
Pod::Spec.new do |s|
  s.name             = 'simple_edge_detection'
  s.version          = '0.0.1'
  s.summary          = 'A new flutter plugin project.'
  s.description      = <<-DESC
A new flutter plugin project.
                       DESC
  s.homepage         = 'http://example.com'
  s.license          = { :file => '../LICENSE' }
  s.author           = { 'Your Company' => 'email@example.com' }
  s.source           = { :path => '.' }
  s.source_files = 'Classes/**/*.{swift,c,m,h,mm,cpp,plist}'
  s.dependency 'Flutter'
  s.platform = :ios, '8.0'

  # Flutter.framework does not contain a i386 slice. Only x86_64 simulators are supported.
  s.pod_target_xcconfig = { 'DEFINES_MODULE' => 'YES', 'VALID_ARCHS[sdk=iphonesimulator*]' => 'x86_64' }
  s.swift_version = '5.0'

  s.preserve_paths = 'opencv2.framework'
  s.xcconfig = { 'OTHER_LDFLAGS' => '-framework opencv2' }
  s.vendored_frameworks = 'opencv2.framework'
  s.frameworks = 'AVFoundation'
  s.library = 'c++'
end

This is what your .podspec file should look like. Everything below s.swift_version='5.0' was added by us.

We tell CocoaPods not to remove opencv2.framework during the build process by using s.preserve_paths. We then instruct the linker to include opencv2, which it can find because we placed the framework in the ios folder earlier. OpenCV needs AVFoundation, the camera abstraction on iOS, because its video capture interface is basically just a wrapper around it.

One important notice: the line

s.source_files = 'Classes/**/*'

was changed to

s.source_files = 'Classes/**/*.{swift,c,m,h,mm,cpp,plist}'

That’s because otherwise the header files would be imported twice (as there would be no file filter), which prevents the build process from working properly.

Implementing native edge detection using C++

Now we are done with the setup. We proceed by implementing the actual native code that both platforms will share.

We start with the native implementation by creating the entry point called native_edge_detection. First, the header (native_edge_detection.hpp):

struct Coordinate
{
    double x;
    double y;
};

struct DetectionResult
{
    Coordinate* topLeft;
    Coordinate* topRight;
    Coordinate* bottomLeft;
    Coordinate* bottomRight;
};

extern "C"
struct DetectionResult *detect_edges(char *str);

We define two structs that are used to represent the result of our process. Coordinate is just a point with relative X and Y coordinates. DetectionResult holds four of them as members; they represent the four corners of our edge detection result. We expose only one public function called detect_edges that receives a string representing the path to our image file and returns a DetectionResult.

The FFI library is able to call C symbols, but we are writing C++ code. That’s why we mark these symbols as extern "C".
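
To see why the unmangled name matters, here is a hedged preview of the Dart-side lookup (shown in full later in this tutorial): the string passed to lookup() must match the extern "C" symbol exactly, which would not be the case if the C++ compiler mangled the name.

// dylib is the DynamicLibrary handle introduced later in the Dart bridge.
final detectEdges = dylib
    .lookup<NativeFunction<Pointer<NativeDetectionResult> Function(Pointer<Utf8>)>>('detect_edges')
    .asFunction<Pointer<NativeDetectionResult> Function(Pointer<Utf8>)>();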

#include "native_edge_detection.hpp"
#include "edge_detector.hpp"
#include <stdlib.h>
#include <opencv2/opencv.hpp>

extern "C" __attribute__((visibility("default"))) __attribute__((used))
struct Coordinate *create_coordinate(double x, double y)
{
    struct Coordinate *coordinate = (struct Coordinate *)malloc(sizeof(struct Coordinate));
    coordinate->x = x;
    coordinate->y = y;
    return coordinate;
}

extern "C" __attribute__((visibility("default"))) __attribute__((used))
struct DetectionResult *create_detection_result(Coordinate *topLeft, Coordinate *topRight, Coordinate *bottomLeft, Coordinate *bottomRight)
{
    struct DetectionResult *detectionResult = (struct DetectionResult *)malloc(sizeof(struct DetectionResult));
    detectionResult->topLeft = topLeft;
    detectionResult->topRight = topRight;
    detectionResult->bottomLeft = bottomLeft;
    detectionResult->bottomRight = bottomRight;
    return detectionResult;
}

extern "C" __attribute__((visibility("default"))) __attribute__((used))
struct DetectionResult *detect_edges(char *str) {
    cv::Mat mat = cv::imread(str);

    if (mat.size().width == 0 || mat.size().height == 0) {
        // No valid image at this path: return a result spanning the whole image
        return create_detection_result(
            create_coordinate(0, 0),
            create_coordinate(1, 0),
            create_coordinate(0, 1),
            create_coordinate(1, 1)
        );
    }

    vector<cv::Point> points = EdgeDetector::detect_edges(mat);

    return create_detection_result(
        create_coordinate((double)points[0].x / mat.size().width, (double)points[0].y / mat.size().height),
        create_coordinate((double)points[1].x / mat.size().width, (double)points[1].y / mat.size().height),
        create_coordinate((double)points[2].x / mat.size().width, (double)points[2].y / mat.size().height),
        create_coordinate((double)points[3].x / mat.size().width, (double)points[3].y / mat.size().height)
    );
}

The strange-looking visibility attributes prevent the linker from discarding these symbols during link-time optimization, which keeps them callable from Dart.

The first thing we do is read the image using cv::imread(). If its width and height are 0, meaning there is no valid image at this path, we return a DetectionResult that spans the whole image (remember, the coordinates are relative).

Otherwise, we forward the image as a Mat to the heart of our detection, the EdgeDetector. The resulting points are absolute, so we divide them by the width and height of our image to make them relative. That will make our life easier later on in the Flutter widget: if the displayed image is smaller than the original (which can happen if at least one side exceeds the dimensions of the screen), we don’t need to calculate a relative scaling factor.
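
As a concrete example of this convention (hypothetical numbers): a corner detected at pixel (600, 400) in a 1200×800 image comes back as (0.5, 0.5), so mapping it onto the rendered image is a single multiplication on the Dart side:

// Relative corner delivered by the native code, e.g. Offset(0.5, 0.5)
final Offset relative = result.topLeft;
// Absolute position inside a rendered image of arbitrary size
// (renderedImageWidth / renderedImageHeight are illustrative variables):
final Offset absolute = Offset(
  relative.dx * renderedImageWidth,
  relative.dy * renderedImageHeight,
);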

Now let’s have a look at our EdgeDetector. I will not explain every line of code in detail because this is still a Flutter tutorial and the C++ algorithm to detect the edges is only exemplary.

#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;


class EdgeDetector {
    public:
    static vector<cv::Point> detect_edges( Mat& image);
    static Mat debug_squares( Mat image );

    private:
    static double get_cosine_angle_between_vectors( cv::Point pt1, cv::Point pt2, cv::Point pt0 );
    static vector<vector<cv::Point> > find_squares(Mat& image);
    static float get_width(vector<cv::Point>& square);
    static float get_height(vector<cv::Point>& square);
};

The public methods are detect_edges, which we have just called from the entry point, and debug_squares. I used debug_squares during development to debug the output when I got strange results. It paints the detected edges onto the original image and returns it. If you experience unexpected results like too small or too big rectangles, you can use this method to find out what’s going on.

#include "edge_detector.hpp"

#include <opencv2/opencv.hpp>
#include <opencv2/imgproc/types_c.h>

using namespace cv;
using namespace std;

// helper function:
// finds a cosine of angle between vectors
// from pt0->pt1 and from pt0->pt2
double EdgeDetector::get_cosine_angle_between_vectors(cv::Point pt1, cv::Point pt2, cv::Point pt0)
{
    double dx1 = pt1.x - pt0.x;
    double dy1 = pt1.y - pt0.y;
    double dx2 = pt2.x - pt0.x;
    double dy2 = pt2.y - pt0.y;
    return (dx1*dx2 + dy1*dy2)/sqrt((dx1*dx1 + dy1*dy1)*(dx2*dx2 + dy2*dy2) + 1e-10);
}

vector<cv::Point> image_to_vector(Mat& image)
{
    int imageWidth = image.size().width;
    int imageHeight = image.size().height;

    return {
        cv::Point(0, 0),
        cv::Point(imageWidth, 0),
        cv::Point(0, imageHeight),
        cv::Point(imageWidth, imageHeight)
    };
}

vector<cv::Point> EdgeDetector::detect_edges(Mat& image)
{
    vector<vector<cv::Point>> squares = find_squares(image);
    vector<cv::Point>* biggestSquare = NULL;

    // Sort so that the points are ordered clockwise

    struct sortY {
        bool operator() (cv::Point pt1, cv::Point pt2) { return (pt1.y < pt2.y);}
    } orderRectangleY;
    struct sortX {
        bool operator() (cv::Point pt1, cv::Point pt2) { return (pt1.x < pt2.x);}
    } orderRectangleX;

    for (int i = 0; i < squares.size(); i++) {
        vector<cv::Point>* currentSquare = &squares[i];

        std::sort(currentSquare->begin(), currentSquare->end(), orderRectangleY);
        std::sort(currentSquare->begin(), currentSquare->begin()+2, orderRectangleX);
        std::sort(currentSquare->begin()+2, currentSquare->end(), orderRectangleX);

        float currentSquareWidth = get_width(*currentSquare);
        float currentSquareHeight = get_height(*currentSquare);

        // Rule out results that are too small ...
        if (currentSquareWidth < image.size().width / 5 || currentSquareHeight < image.size().height / 5) {
            continue;
        }

        // ... or that practically span the whole image
        if (currentSquareWidth > image.size().width * 0.99 || currentSquareHeight > image.size().height * 0.99) {
            continue;
        }

        if (biggestSquare == NULL) {
            biggestSquare = currentSquare;
            continue;
        }

        float biggestSquareWidth = get_width(*biggestSquare);
        float biggestSquareHeight = get_height(*biggestSquare);

        if (currentSquareWidth * currentSquareHeight >= biggestSquareWidth * biggestSquareHeight) {
            biggestSquare = currentSquare;
        }

    }

    if (biggestSquare == NULL) {
        return image_to_vector(image);
    }

    std::sort(biggestSquare->begin(), biggestSquare->end(), orderRectangleY);
    std::sort(biggestSquare->begin(), biggestSquare->begin()+2, orderRectangleX);
    std::sort(biggestSquare->begin()+2, biggestSquare->end(), orderRectangleX);

    return *biggestSquare;
}

float EdgeDetector::get_height(vector<cv::Point>& square) {
    float upperLeftToLowerRight = square[3].y - square[0].y;
    float upperRightToLowerLeft = square[1].y - square[2].y;

    return max(upperLeftToLowerRight, upperRightToLowerLeft);
}

float EdgeDetector::get_width(vector<cv::Point>& square) {
    float upperLeftToLowerRight = square[3].x - square[0].x;
    float upperRightToLowerLeft = square[1].x - square[2].x;

    return max(upperLeftToLowerRight, upperRightToLowerLeft);
}

cv::Mat EdgeDetector::debug_squares( cv::Mat image )
{
    vector<vector<cv::Point> > squares = find_squares(image);

    for (const auto & square : squares) {
        // draw rotated rect
        cv::RotatedRect minRect = minAreaRect(cv::Mat(square));
        cv::Point2f rect_points[4];
        minRect.points( rect_points );
        for ( int j = 0; j < 4; j++ ) {
            cv::line( image, rect_points[j], rect_points[(j+1)%4], cv::Scalar(0,0,255), 1, 8 ); // red (BGR)
        }
    }

    return image;
}

vector<vector<cv::Point> > EdgeDetector::find_squares(Mat& image)
{
    vector<int> usedThresholdLevel;
    vector<vector<Point> > squares;

    Mat gray0(image.size(), CV_8U), gray;

    cvtColor(image, gray, COLOR_BGR2GRAY);
    medianBlur(gray, gray, 3);      // blur will enhance edge detection
    vector<vector<cv::Point> > contours;

    int thresholdLevels[] = {10, 30, 50, 70};
    for(int thresholdLevel : thresholdLevels) {
        Canny(gray, gray0, thresholdLevel, thresholdLevel*3, 3); // max thres: 100  // *3 => recommended setting

        // Dilate helps to remove potential holes between edge segments
        dilate(gray0, gray0, Mat(), Point(-1, -1));

        // Find contours and store them in a list
        findContours(gray0, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);

        // Test contours
        vector<Point> approx;
        for (const auto & contour : contours) {
            // approximate contour with accuracy proportional
            // to the contour perimeter
            approxPolyDP(Mat(contour), approx, arcLength(Mat(contour), true) * 0.02, true);

            // Note: absolute value of an area is used because
            // area may be positive or negative - in accordance with the
            // contour orientation
            if (approx.size() == 4 && fabs(contourArea(Mat(approx))) > 1000 &&
                isContourConvex(Mat(approx))) {
                double maxCosine = 0;

                for (int j = 2; j < 5; j++) {
                    double cosine = fabs(get_cosine_angle_between_vectors(approx[j % 4], approx[j - 2], approx[j - 1]));
                    maxCosine = MAX(maxCosine, cosine);
                }

                if (maxCosine < 0.3) {
                    squares.push_back(approx);
                    usedThresholdLevel.push_back(thresholdLevel);
                }
            }
        }
    }

    return squares;
}

This is a lot of code, but the relevant part is basically taken from Stack Overflow. I enhanced it by ruling out detection results that are too small and by ordering the rectangles so that their points are always returned in a clockwise manner starting from the top left point.

We store the above classes as edge_detector.hpp and edge_detector.cpp respectively.

There’s only one thing left to do before we are done with the package and can continue with actually calling our edge detection from a Flutter widget: we need to write the class that makes use of the ffi package to call the library we have just implemented. We call this bridge class edge_detection.dart, put it in the lib folder as the only file of our plugin and make it look like this:

import 'dart:async';
import 'dart:ffi';
import 'dart:io';
import 'dart:ui';
import 'package:ffi/ffi.dart';
import 'package:flutter/material.dart';


class Coordinate extends Struct {
  @Double()
  double x;

  @Double()
  double y;

  factory Coordinate.allocate(double x, double y) =>
      allocate<Coordinate>().ref
        ..x = x
        ..y = y;
}

class NativeDetectionResult extends Struct {
  Pointer<Coordinate> topLeft;
  Pointer<Coordinate> topRight;
  Pointer<Coordinate> bottomLeft;
  Pointer<Coordinate> bottomRight;

  factory NativeDetectionResult.allocate(
      Pointer<Coordinate> topLeft,
      Pointer<Coordinate> topRight,
      Pointer<Coordinate> bottomLeft,
      Pointer<Coordinate> bottomRight) =>
      allocate<NativeDetectionResult>().ref
        ..topLeft = topLeft
        ..topRight = topRight
        ..bottomLeft = bottomLeft
        ..bottomRight = bottomRight;
}

class EdgeDetectionResult {
  EdgeDetectionResult({
    @required this.topLeft,
    @required this.topRight,
    @required this.bottomLeft,
    @required this.bottomRight,
  });

  Offset topLeft;
  Offset topRight;
  Offset bottomLeft;
  Offset bottomRight;
}

typedef DetectEdgesFunction = Pointer<NativeDetectionResult> Function(
  Pointer<Utf8> x
);

class EdgeDetection {
  static Future<EdgeDetectionResult> detectEdges(String path) async {
    DynamicLibrary nativeEdgeDetection = _getDynamicLibrary();

    final detectEdges = nativeEdgeDetection
        .lookup<NativeFunction<DetectEdgesFunction>>("detect_edges")
        .asFunction<DetectEdgesFunction>();

    NativeDetectionResult detectionResult = detectEdges(Utf8.toUtf8(path)).ref;

    return EdgeDetectionResult(
        topLeft: Offset(
            detectionResult.topLeft.ref.x, detectionResult.topLeft.ref.y
        ),
        topRight: Offset(
            detectionResult.topRight.ref.x, detectionResult.topRight.ref.y
        ),
        bottomLeft: Offset(
            detectionResult.bottomLeft.ref.x, detectionResult.bottomLeft.ref.y
        ),
        bottomRight: Offset(
            detectionResult.bottomRight.ref.x, detectionResult.bottomRight.ref.y
        )
    );
  }

  static DynamicLibrary _getDynamicLibrary() {
    final DynamicLibrary nativeEdgeDetection = Platform.isAndroid
        ? DynamicLibrary.open("libnative_edge_detection.so")
        : DynamicLibrary.process();
    return nativeEdgeDetection;
  }
}

The structs we defined in the C/C++ code also need to be defined here because we have to describe the return value of the function we are calling. That’s why we create classes that extend the Struct class. If you want to know more about how to exchange data via FFI, have a look at this official example.

We also create a class called EdgeDetectionResult that holds essentially the same information as our DetectionResult class but with well-known data types of the Flutter world. We represent the points using the Offset class.

Because C does not know strings, we use a char pointer (Pointer<Utf8> on the Dart side) as the input parameter for our native function. We use DynamicLibrary to open our native library. Its file name is always the name of our native build target (native_edge_detection), prefixed with lib and with .so as the file ending. For more information on how to call native libraries, have a look at the official docs.
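
One thing to be aware of: the structs are malloc’ed on the C++ side and never freed in this tutorial. A hedged sketch of how the memory could be released using the free() helper of package:ffi 0.1.x, after the values have been copied into Dart objects (this assumes both sides use the same allocator, which is the case on Android and iOS; it is not part of the plugin code above):

// Hypothetical cleanup, not part of the original plugin code.
final pathPointer = Utf8.toUtf8(path);
final resultPointer = detectEdges(pathPointer);
final result = resultPointer.ref;
// ... copy result.topLeft.ref.x etc. into Dart Offsets first ...
free(result.topLeft);
free(result.topRight);
free(result.bottomLeft);
free(result.bottomRight);
free(resultPointer);
free(pathPointer);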

We used the ffi package in the above class, so we need to import it into our project by editing the pubspec.yaml:

dependencies:
  flutter:
    sdk: flutter
  ffi: ^0.1.3

Implementing the Flutter widgets

Now we are done with all the prerequisites. We implemented a package (plugin) we can use in any Flutter project to detect edges on a given image. Let’s use that to implement the goal we described at the beginning of this article.

For that, we create a new project edge_detection_sample that resides in the same directory as our package.

In the pubspec.yaml, we import that local package like this:

dependencies:
  flutter:
    sdk: flutter
  camera: ^0.5.8+5
  path_provider: ^1.6.14
  image_picker: ^0.6.7+7

  simple_edge_detection:
    path: ../simple_edge_detection/

As you can see, we also need the camera, path_provider and image_picker packages. That’s because we want to let the app detect the edges of an image coming either from the camera or from the gallery.

Let’s start with a widget called Scan. This is the place where the user should be able to scan e.g. a sheet of paper, either with the camera or from the gallery.

import 'dart:developer';

import 'package:camera/camera.dart';
import 'package:flutter/material.dart';
import 'package:simple_edge_detection/edge_detection.dart';
// plus the imports of the CameraView and EdgeDetectionPreview widgets we create below

class Scan extends StatefulWidget {
  @override
  _ScanState createState() => _ScanState();
}

class _ScanState extends State<Scan> {
  CameraController controller;
  List<CameraDescription> cameras;
  String imagePath;
  EdgeDetectionResult edgeDetectionResult;

  @override
  void initState() {
    super.initState();
    checkForCameras().then((value) {
      _initializeController();
    });
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: Stack(
        children: <Widget>[
          _getMainWidget(),
          _getBottomBar(),
        ],
      ),
    );
  }

  Widget _getMainWidget() {
    if (imagePath == null && edgeDetectionResult == null) {
      return CameraView(
        controller: controller
      );
    }

    return EdgeDetectionPreview(
      imagePath: imagePath,
      edgeDetectionResult: edgeDetectionResult,
    );
  }

  Future<void> checkForCameras() async {
    cameras = await availableCameras();
  }

  void _initializeController() {
    if (cameras.length == 0) {
      log('No cameras detected');
      return;
    }

    controller = CameraController(
        cameras[0],
        ResolutionPreset.max,
        enableAudio: false
    );
    controller.initialize().then((_) {
      if (!mounted) {
        return;
      }
      setState(() {});
    });
  }

  @override
  void dispose() {
    controller?.dispose();
    super.dispose();
  }

  Widget _getButtonRow() {
    if (imagePath != null) {
      return Align(
        alignment: Alignment.bottomCenter,
        child: FloatingActionButton(
          foregroundColor: Colors.white,
          child: Icon(Icons.arrow_back),
          onPressed: () {
            setState(() {
              edgeDetectionResult = null;
              imagePath = null;
            });
          },
        ),
      );
    }

    return Row(
      mainAxisAlignment: MainAxisAlignment.center,
      children: [
        FloatingActionButton(
          foregroundColor: Colors.white,
          child: Icon(Icons.camera_alt),
          onPressed: onTakePictureButtonPressed,
        ),
        SizedBox(width: 16),
        FloatingActionButton(
          foregroundColor: Colors.white,
          child: Icon(Icons.image),
          onPressed: _onGalleryButtonPressed,
        ),
      ]
    );
  }

  Padding _getBottomBar() {
    return Padding(
      padding: EdgeInsets.only(bottom: 32),
      child: Align(
        alignment: Alignment.bottomCenter,
        child: _getButtonRow()
      )
    );
  }
}

We make it a stateful widget because we have certain variables here that can change during the lifetime of this widget that should be managed by the widget itself:

  • controller: The CameraController – this is necessary to take a picture when a button is pressed
  • cameras: The detected cameras on this device. They are checked for at the beginning and then used to initialize the controller
  • imagePath: The path leading to the current image
  • edgeDetectionResult: The detection result of the current image

The first thing we do during state initialization is check for cameras. If none are found, nothing happens. Otherwise, the camera controller is initialized with the first camera found, which is usually the back-facing camera of a smartphone.

Now, the root widget inside the Scaffold is a Stack. We want the camera preview to fill the whole screen if possible and then place the bottom bar with its buttons on top.

_getMainWidget() displays either the camera view (when there is no imagePath and no edgeDetectionResult, which means no picture has been taken yet) or the edge detection preview, i.e. the image with the detected edges painted on top, once both variables are set.

The CameraView widget and the EdgeDetectionPreview are still only placeholders. Let’s implement them.

import 'package:camera/camera.dart';
import 'package:flutter/material.dart';

class CameraView extends StatelessWidget {
  CameraView({
    this.controller
  });

  final CameraController controller;

  @override
  Widget build(BuildContext context) {
    return _getCameraPreview();
  }

  Widget _getCameraPreview() {
    if (controller == null || !controller.value.isInitialized) {
      return Container();
    }

    return Center(
      child: AspectRatio(
        aspectRatio: controller.value.aspectRatio,
        child: CameraPreview(controller)
      )
    );
  }
}

The display logic of the CameraView is fairly simple as it just uses the CameraController of the parent widget to display the CameraPreview. We use an AspectRatio widget in order to give it the same aspect ratio as the stream of images coming from the camera.

The EdgeDetectionPreview has a little bit more code:

import 'dart:async';
import 'dart:io';
import 'dart:math';
import 'dart:typed_data';
import 'dart:ui' as ui;
import 'package:simple_edge_detection/edge_detection.dart';
import 'package:flutter/material.dart';

class EdgeDetectionPreview extends StatefulWidget {
  EdgeDetectionPreview({
    this.imagePath,
    this.edgeDetectionResult
  });

  final String imagePath;
  final EdgeDetectionResult edgeDetectionResult;

  @override
  _EdgeDetectionPreviewState createState() => _EdgeDetectionPreviewState();
}

class _EdgeDetectionPreviewState extends State<EdgeDetectionPreview> {
  GlobalKey imageWidgetKey = GlobalKey();

  @override
  Widget build(BuildContext mainContext) {
    return Center(
      child: Stack(
        fit: StackFit.expand,
        children: <Widget>[
          Center(
            child: Text('Loading ...')
          ),
          Image.file(
            File(widget.imagePath),
            fit: BoxFit.contain,
            key: imageWidgetKey
          ),
          FutureBuilder<ui.Image>(
            future: loadUiImage(widget.imagePath),
            builder: (BuildContext context, AsyncSnapshot<ui.Image> snapshot) {
              return _getEdgePaint(snapshot, context);
            }
          ),
        ],
      ),
    );
  }

  Widget _getEdgePaint(AsyncSnapshot<ui.Image> imageSnapshot, BuildContext context) {
    if (imageSnapshot.connectionState == ConnectionState.waiting)
      return Container();

    if (imageSnapshot.hasError)
      return Text('Error: ${imageSnapshot.error}');

    if (widget.edgeDetectionResult == null)
      return Container();

    final keyContext = imageWidgetKey.currentContext;

    if (keyContext == null) {
      return Container();
    }

    final box = keyContext.findRenderObject() as RenderBox;

    return CustomPaint(
        size: Size(box.size.width, box.size.height),
        painter: EdgePainter(
          topLeft: widget.edgeDetectionResult.topLeft,
          topRight: widget.edgeDetectionResult.topRight,
          bottomLeft: widget.edgeDetectionResult.bottomLeft,
          bottomRight: widget.edgeDetectionResult.bottomRight,
          image: imageSnapshot.data,
          color: Theme.of(context).accentColor
        )
    );
  }

  Future<ui.Image> loadUiImage(String imageAssetPath) async {
    final Uint8List data = await File(imageAssetPath).readAsBytes();
    final Completer<ui.Image> completer = Completer();
    ui.decodeImageFromList(Uint8List.view(data.buffer), (ui.Image image) {
      return completer.complete(image);
    });
    return completer.future;
  }
}

Instead of directly painting the image at the bottom layer of the stack and the edges on top, we do something different: we additionally load the image asynchronously as a ui.Image and use a FutureBuilder for the edge overlay. We do that because we need the original dimensions of the image as well as the dimensions of the rendered image. This way, when we draw the edges on top, we can draw them according to the current scale.

The future the FutureBuilder is based on is the Completer of decoding the image that has been read from the given path.

Another advantage is that we can show something in every phase: before the image in the stack is displayed, we show a text saying “Loading …”. While the asynchronous decoding of the image is happening, we already show the image. And finally, we paint the edges on top. That way, the user is never left with a blank screen until everything is ready.

It’s important that we choose fit: BoxFit.contain to display the image. This way, the image will always fill the width or the height depending on its aspect ratio. Otherwise, smaller images might appear tiny in the center. In combination with fit: StackFit.expand, this leads to the image filling all the available space.

Another significant part is the GlobalKey imageWidgetKey. This way, we can reuse the size of the actually rendered Image widget when painting the edges on top. For more information on how to get the size of a widget in the context of another widget, please refer to the official docs.

Now let’s have a look at the CustomPainter being responsible for drawing the image and the edges:

class EdgePainter extends CustomPainter {
  EdgePainter({
    this.topLeft,
    this.topRight,
    this.bottomLeft,
    this.bottomRight,
    this.image,
    this.color
  });

  Offset topLeft;
  Offset topRight;
  Offset bottomLeft;
  Offset bottomRight;

  ui.Image image;
  Color color;

  @override
  void paint(Canvas canvas, Size size) {
    double top = 0.0;
    double left = 0.0;

    double renderedImageHeight = size.height;
    double renderedImageWidth = size.width;

    double widthFactor = size.width / image.width;
    double heightFactor = size.height / image.height;
    double sizeFactor = min(widthFactor, heightFactor);

    renderedImageHeight = image.height * sizeFactor;
    top = ((size.height - renderedImageHeight) / 2);

    renderedImageWidth = image.width * sizeFactor;
    left = ((size.width - renderedImageWidth) / 2);

    final points = [
      Offset(left + topLeft.dx * renderedImageWidth, top + topLeft.dy * renderedImageHeight),
      Offset(left + topRight.dx * renderedImageWidth, top + topRight.dy * renderedImageHeight),
      Offset(left + bottomRight.dx * renderedImageWidth, top + (bottomRight.dy * renderedImageHeight)),
      Offset(left + bottomLeft.dx * renderedImageWidth, top + bottomLeft.dy * renderedImageHeight),
      Offset(left + topLeft.dx * renderedImageWidth, top + topLeft.dy * renderedImageHeight),
    ];

    final paint = Paint()
      ..color = color.withOpacity(0.5)
      ..strokeWidth = 2
      ..strokeCap = StrokeCap.round;

    canvas.drawPoints(ui.PointMode.polygon, points, paint);

    for (Offset point in points) {
      canvas.drawCircle(point, 10, paint);
    }
  }

  @override
  bool shouldRepaint(CustomPainter old) {
    return true;
  }
}

The crucial part is where we determine renderedImageWidth and renderedImageHeight. Because we use BoxFit.contain, the image may not fill the whole screen. Instead, the longer side fits into the screen, which can cause bars to appear along the shorter side. If we did nothing, the painter would draw the edges based on the assumption that the aspect ratio of the rendered image equals that of the original image. So we calculate a factor to adjust the dimensions of the painted edges.
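
To make this math tangible, here is a worked example with hypothetical numbers, following the same steps as the paint() method above:

// A 3000x4000 image rendered into a 1080x1920 canvas with BoxFit.contain:
final widthFactor = 1080 / 3000;                    // 0.36
final heightFactor = 1920 / 4000;                   // 0.48
final sizeFactor = min(widthFactor, heightFactor);  // 0.36 -> the width limits
final renderedImageWidth = 3000 * sizeFactor;       // 1080.0, fills the width
final renderedImageHeight = 4000 * sizeFactor;      // 1440.0, leaves vertical bars
final top = (1920 - renderedImageHeight) / 2;       // 240.0 px bar above and below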

Okay, now we have a screen that displays the camera preview and two buttons, but the buttons don’t trigger anything yet. Let’s change that.

Future _detectEdges(String filePath) async {
  if (!mounted || filePath == null) {
    return;
  }

  setState(() {
    imagePath = filePath;
  });

  EdgeDetectionResult result = await EdgeDetector().detectEdges(filePath);

  setState(() {
    edgeDetectionResult = result;
  });
}

void onTakePictureButtonPressed() async {
  String filePath = await takePicture();

  log('Picture saved to $filePath');

  await _detectEdges(filePath);
}

void _onGalleryButtonPressed() async {
  final picker = ImagePicker();
  final pickedFile = await picker.getImage(source: ImageSource.gallery);
  // pickedFile is null if the user cancels; _detectEdges handles a null path
  final filePath = pickedFile?.path;

  log('Picture saved to $filePath');

  _detectEdges(filePath);
}

If we have a filePath, which means that an image was either taken by the camera or chosen from the gallery, we start the edge detection. The result is then stored in the member variable of our widget.

This _detectEdges() method is called both from the callback of the camera button and from the gallery button as soon as the filePath is obtained.
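
One helper is referenced above but not shown: takePicture(). Here is a minimal sketch of how it could look, assuming the camera 0.5.x API (where takePicture() expects a target file path) and path_provider for a writable directory; the directory and file names are illustrative:

// Requires dart:io, dart:developer, package:camera and package:path_provider.
Future<String> takePicture() async {
  if (controller == null || !controller.value.isInitialized) {
    log('Error: camera not initialized.');
    return null;
  }
  if (controller.value.isTakingPicture) {
    // A capture is already pending; do nothing.
    return null;
  }

  final Directory extDir = await getTemporaryDirectory();
  final String dirPath = '${extDir.path}/Pictures/edge_detection';
  await Directory(dirPath).create(recursive: true);
  final String filePath = '$dirPath/${DateTime.now().millisecondsSinceEpoch}.jpg';

  try {
    await controller.takePicture(filePath);
  } on CameraException catch (e) {
    log(e.toString());
    return null;
  }

  return filePath;
}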

Now what’s missing is the EdgeDetector. This class is responsible for calling the EdgeDetection class from our package, which forwards the call to the native code. We need to take care that the UI is not blocked during that call.

import 'dart:async';
import 'dart:isolate';

import 'package:simple_edge_detection/edge_detection.dart';

class EdgeDetector {
  static Future<void> startEdgeDetectionIsolate(EdgeDetectionInput edgeDetectionInput) async {
    EdgeDetectionResult result = await EdgeDetection.detectEdges(edgeDetectionInput.inputPath);
    edgeDetectionInput.sendPort.send(result);
  }

  Future<EdgeDetectionResult> detectEdges(String filePath) async {
    // Creating a port for communication with the isolate and arguments for the entry point
    final port = ReceivePort();

    // Spawning an isolate
    Isolate.spawn<EdgeDetectionInput>(
      startEdgeDetectionIsolate,
      EdgeDetectionInput(
        inputPath: filePath,
        sendPort: port.sendPort
      ),
      onError: port.sendPort,
      onExit: port.sendPort
    );

    // Making a variable to store a subscription in
    StreamSubscription sub;

    // Listening for messages on the port

    var completer = new Completer<EdgeDetectionResult>();

    sub = port.listen((result) async {
      // Cancel the subscription after the first message has been received
      await sub?.cancel();
      completer.complete(await result);
    });

    return completer.future;
  }
}

class EdgeDetectionInput {
  EdgeDetectionInput({
    this.inputPath,
    this.sendPort
  });

  String inputPath;
  SendPort sendPort;
}

For the call to be non-blocking, it’s not sufficient to use Futures. That’s because a Future uses Dart’s event loop to schedule a task for some time in the future. For a computationally demanding task this doesn’t help, because the work still shares resources with the rest of the code running on the same thread. To run a part of the program that is highly independent, we need to go for Isolates.

The spawn() method of the Isolate expects a static method, an argument and a port on which the spawning component can receive errors or the information that the isolate has finished.

Since we can only supply a single argument, we need to wrap our input path and our sendPort in a class. The sendPort is very important as it’s the way of communicating from the isolate back to the caller. We need this to receive the result of our edge detection. When the isolate calls send() on the send port, we receive an event on the receive port. That’s why we listen to it. We create a Completer and, by the time the listener on the port receives the result, we complete the future with the received data.
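
Note that onError and onExit post to the same port: on an error, the port receives a List containing the error and the stack trace, and on exit it may receive null. A slightly more defensive listener (a hedged sketch, not part of the code above) would therefore check the message type before completing:

sub = port.listen((message) async {
  await sub?.cancel();
  if (message is EdgeDetectionResult) {
    completer.complete(message);
  } else {
    // Error (a List of error and stack trace) or exit notification (null).
    completer.completeError(message ?? StateError('Isolate exited without a result'));
  }
});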

Result

Okay, we’re done. That’s what our final result looks like:

The result

Pretty cool! There are still a lot of things to improve, like resizing the image to a maximum size to avoid possible performance and memory issues, or using the image stream from the camera to display a live preview of the edge detection. But for now, to show how things can be done, this is it.
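
For the resizing idea, one low-effort option is to let image_picker downscale the picked image via its maxWidth/maxHeight parameters, so the native detector receives a smaller input. A hedged sketch with arbitrary values:

final pickedFile = await picker.getImage(
  source: ImageSource.gallery,
  // Caps the dimensions while preserving the aspect ratio.
  maxWidth: 1500,
  maxHeight: 1500,
);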

An improvement was made afterwards: I added a magnifier that lets you easily position the touch bubbles without the finger covering the area that is crucial. The tutorial can be found here.

The full code can be found on GitHub: https://github.com/flutter-clutter/flutter-simple-edge-detection

Note that I could not include all the OpenCV builds in the GitHub repository. You need to perform the download steps (as described under project setup) yourself.

Comments

Đạt

This is a wonderful package. But I need the image not to be converted to gray; how can I do that? I tried to turn off COLOR_BGR2GRAY in the .cpp but it didn’t work

Marc
In reply to Đạt's comment

Hey there! Thank you for your appreciation. In fact, you’ll also have to remove two lines below cvtColor: adaptiveThreshold. This makes the image a pure black and white image. The requirement is to have a grayscaled image, which is what cvtColor takes care of. So just make the function look like this:

Mat ImageProcessor::process_image(Mat img, float x1, float y1, float x2, float y2, float x3, float y3, float x4, float y4) {
    Mat dst = ImageProcessor::crop_and_transform(img, x1, y1, x2, y2, x3, y3, x4, y4);
    return dst;
}

Đạt
In reply to Marc's comment

Thanks for your reply. I found that out after asking :)) but now I have trouble building for iOS. I set it up for iOS like the tutorial said but the error is: “2. Did not find header ‘opencv.hpp’ in framework ‘opencv2’ (loaded from ‘/Users/mac/Documents/projects/flutter/memori-mobile/ios/.symlinks/plugins/simple_edge_detection/ios’)

‘opencv2/opencv.hpp’ file not found”. I tried to rename the package to simple_edge_detection and it still didn’t work. My application’s directory tree: memori_mobile: -fpo-simple-edge-detection: -your repo -lib: -my application. My pubspec.yaml: simple_edge_detection: path: ./fpo-simple-edge-detection

Marc
In reply to Đạt's comment

Try to create a new directory called “packages” under ‘/Users/mac/Documents/projects/flutter/memori-mobile/packages (assuming this is the root of your project). Then import the edge detection package inside the pubspec.yml of your project with

simple_edge_detection:
  path: packages/simple-edge-detection

I hope this helps. I think iOS has problems building if the sources are outside of the project root.

ganesh
In reply to Marc's comment

Hi,

I want to keep the image as it is, maybe with color as well. How can I achieve that?

Ganesh c kumar
In reply to ganesh's comment

in Android

Marc
In reply to Ganesh c kumar's comment

Hey there. You have to change the process_image function a little bit by removing two function calls - one for grayscaling and one for adaptive threshold.

Just make the function look like this:

Mat ImageProcessor::process_image(Mat img, float x1, float y1, float x2, float y2, float x3, float y3, float x4, float y4) {
    Mat dst = ImageProcessor::crop_and_transform(img, x1, y1, x2, y2, x3, y3, x4, y4);
    return dst;
}

Since this question seems to come up frequently, I will make a code change that enables you to choose whether to make it black and white.

Does that help?

Androidena

You can’t add Android plugins to Flutter this way. You have to use Flutter plugins, which is why you’re getting the error “Project with path ‘:openCVLibrary343’ could not be found in project ‘:flutter_plugin’”. The post I linked this to not only says OpenCV is not currently available on Flutter but also explains how you can provide your own plugin following this. Please take a look at it. SnakeyHips Nov 1 ‘18 at 16:22

Marc
In reply to Androidena's comment

I’m sorry, I don’t get what you’re trying to say. I’m not getting an error. Also, I don’t add an Android Plugin. I am using the native C++ OpenCV Library. Can you elaborate what you’re trying to say, please? :)

Adarsh Hegde

does this also stretch out the cropped image to full width and height like how the edge_detection plugin did it?

Marc
In reply to Adarsh Hegde's comment

You mean skewing it so that even if you select a rotated polygon, it will be displayed as a non-rotated rectangle filling as much space as possible in the image preview? Then yes! :)

Prathmesh

typedef WarpFunc = void Function(ffi.Pointer x, ffi.Pointer y, double, double, double, double, double, double, double, double);

typedef _warp_func = ffi.Void Function(ffi.Pointer, ffi.Pointer, double, double, double, double, double, double, double, double);

class EdgeDetection {
  static void warp(String inputPath, String outputPath, double co1, double co2, double co3, double co4, double co5, double co6, double co7, double co8) {
    ffi.DynamicLibrary nativeEdgeDetection = _getDynamicLibrary();

    final warp = nativeEdgeDetection
        .lookup<ffi.NativeFunction<_warp_func>>("warp")
        .asFunction<WarpFunc>();

    warp(Utf8.toUtf8(inputPath), Utf8.toUtf8(outputPath), co1, co2, co3, co4,
        co5, co6, co7, co8);
  }

  static ffi.DynamicLibrary _getDynamicLibrary() {
    final ffi.DynamicLibrary nativeEdgeDetection = Platform.isAndroid
        ? ffi.DynamicLibrary.open("libnative_edge_detection.so")
        : ffi.DynamicLibrary.process();
    return nativeEdgeDetection;
  }
}

I did the above code for calling my C files in Dart. But when I run my plugin in my app, it states this error:

Error: Expected type ‘NativeFunction<Void Function(Pointer, Pointer, double, double, double, double, double, double, double, double)>’ to be a valid and instantiated subtype of ‘NativeType’.

Can you tell me what is wrong here?

Marc
In reply to Prathmesh's comment

Hey there, Prathmesh. I am currently on vacation. I will have a look at it as soon as I come back (which will be at the beginning of November). I hope you have the patience to wait :). Cheers!

Prathmesh
In reply to Marc's comment

Yes Surely. Thank you.

Marc
In reply to Prathmesh's comment

The function type you expect from the lookup (_warp_func) needs to have native types. In your case, you used “double”, which is a Dart type. In fact, you need to use “Double” (with a capital “D”) because the native interface does not know about Dart types.

Please change the types in _warp_func like this:

typedef _warp_func = ffi.Void Function(ffi.Pointer, ffi.Pointer, ffi.Double, ffi.Double, ffi.Double, ffi.Double, ffi.Double, ffi.Double, ffi.Double, ffi.Double);

and then also change the lookup to:

final warp = nativeEdgeDetection
    .lookup<ffi.NativeFunction<_warp_func>>("warp")
    .asFunction<WarpFunc>();

Hope that helps!

Chris von Wielligh

Hi Marc, in the template generation, there is no jniLibs folder under project_root/android/src/main/. Is this something we need to create manually?

Marc
In reply to Chris von Wielligh's comment

Yes, that’s correct! You need to create this directory first if it does not exist in your project. I’m sorry if my explanation in the tutorial is not clear enough regarding that issue.

ZooL

Hi Marc, I was wondering if you have any plans to turn the edge detection part into a Flutter plugin?

Marc
In reply to ZooL's comment

Hey ZooL,

thank you for your suggestion! My original plan was to do exactly that. However, I had problems because of the file size of the dependencies, as I pointed out here: https://github.com/flutter-clutter/flutter-simple-edge-detection/issues/9. But I will soon give it another try. If I succeed, I will let you know :).

Farhan Shaikh
In reply to Marc's comment

The tutorial is very interesting. Although I have not implemented it, I am very curious what the size of the package would be, especially since the OpenCV zip files for Android and iOS are almost 500 MB.

Marc
In reply to Farhan Shaikh's comment

If you choose to create an AAB (Android App Bundle) instead of an APK, you end up with an app size of about 30 MB on the device if I remember correctly. That’s because the libraries will only be included for the respective architecture and operating system.

Antoine

Hi !

I’ve tried to add it to my project; it works fine on Android but I can’t make it work on the iOS simulator. I’ve tried everything I could find.

Here is the error:

Could not find or use auto-linked library 'swiftAVFoundation'
Undefined symbols for architecture x86_64:
  "_swift_FORCE_LOAD$_swiftAVFoundation", referenced from:
    _swift_FORCE_LOAD$_swiftAVFoundation$_opencv2 in opencv2(ByteVectorExt.o)
    _swift_FORCE_LOAD$_swiftAVFoundation$_opencv2 in opencv2(DoubleVectorExt.o)
    _swift_FORCE_LOAD$_swiftAVFoundation$_opencv2 in opencv2(FloatVectorExt.o)
    _swift_FORCE_LOAD$_swiftAVFoundation$_opencv2 in opencv2(IntVectorExt.o)
    _swift_FORCE_LOAD$_swiftAVFoundation$_opencv2 in opencv2(MatExt.o)
    _swift_FORCE_LOAD$_swiftAVFoundation$_opencv2 in opencv2(CvTypeExt.o)
  (maybe you meant: _swift_FORCE_LOAD$_swiftAVFoundation$_opencv2)
ld: symbol(s) not found for architecture x86_64

I’ve read that I had to statically link the c++ sources files to the xcode project inside the Runner.xcworkspace but I have no idea how to do it properly, do you have any idea ?

My project tree is the following : mobileapp:

  • android
  • ios
  • lib
  • simple_edge_detection:
    • android
    • include
    • lib
    • ios:
      • Classes
      • opencv2.framework

Marc
In reply to Antoine's comment

Hello Antoine! That’s a strange issue. I suspect that there are some issues in the podspec file. Could you compare your podspec file with this one: https://github.com/flutter-clutter/flutter-simple-edge-detection/blob/master/ios/simple_edge_detection.podspec?

Antoine
In reply to Marc's comment

I have the exact same one. The library works on the x86_64 architecture, right?

In this tutorial (https://flutter.dev/docs/development/platform-integration/c-interop#step-2-add-cc-sources), do I need to perform this step? How do I do that?

On iOS, you need to tell Xcode to statically link the file: 1. In Xcode, open Runner.xcworkspace. 2. Add the C/C++/Objective-C/Swift source files to the Xcode project.

Or should I merge the simple_edge_detection “project” with my project at the root ?


Marc
In reply to Antoine's comment

No, we’re dynamically linking it using the podspec file. That means this step shouldn’t be necessary. Just to make it clear: if you clone the Github project, follow the instructions and start the project, you get the error you mentioned?

Murad Kakabaev
In reply to Antoine's comment

Hi there, I had the same issue for OpenCV 4.5.0 while 4.4.0 works properly. Try 4.4.0 builds from the OpenCV site.

Antoine
In reply to Murad Kakabaev's comment

Thanks a lot, I tried with OpenCV 4.4.0 and it works !

KA

Hi Marc,

Are you planning on using the image stream from the camera to display a live preview of the edge detection? How difficult do you think this will be and how long do you think it would take?


Marc
In reply to KA's comment

Good question. Because of the YUV420 (YCbCr) to RGB conversion, the task is not that trivial, but it sounds like something I could add as a feature in the future! Thanks for the suggestions.

ZooL
In reply to Marc's comment

I am actually working on that at the moment, but as you said, it is not trivial. Performance so far is horrible and the image stream gets in the way of taking an actual picture. I am wondering if it might be more efficient to write one’s own camera preview and work with Texture, similar to how the original CameraPreview widget works.

Thinh
In reply to ZooL's comment

I’m trying the same. Did you see any chance that it works?

Nemo
In reply to ZooL's comment

Using startImageStream and converting each YUV420 frame into RGB in C++ and then to cv::Mat takes time; applying some filters then adds more time and overhead. How come the image stream is only in YUV420 and not in RGB format? Is there any workaround?

I would like to process about 10 frames per second with openCV (using of course ffi), is there any better approach?

Peter

I copied all the code from this page and got it running on Android. However, somehow I always get the same four coordinates back: (0,0), (0,1), (1,0), (1,1). So basically the whole screen, I guess. I verified by adding some log messages that the image is properly read in C++. So the issue seems to be the detector. I basically have no idea where to look further. :)

Marc
In reply to Peter's comment

Hey Peter,

thank you for your interest in this tutorial. Does that happen with every image you try? The edge detection algorithm is not perfect. It works best when there is a high contrast between the shape that is to be detected and the background. If that doesn’t help, you might want to fork this: https://github.com/flutter-clutter/flutter-simple-edge-detection and try if it works. If it does, compare it to your code or just use the fork anyway.

Cheers!


Shubham
In reply to Marc's comment

Hi Marc,

the tutorial is interesting and is helping me with my application development in Flutter. As you said the edge detection algorithm is not perfect, so any idea how we can make it better or even perfect? Also, is it much more difficult to do so?

Marc
In reply to Shubham's comment

I’m glad my tutorial helped you. Well, to improve the actual detection, you would have to go down the C++ layer and tweak the values I put into the OpenCV library. I would suggest to do that outside of Dart and just experiment a little bit with isolated C++ code.

Dzhamil

Hey, Marc. Your tutorial is awesome, but I am confused. There is a class (class EdgePainter extends CustomPainter) in your tutorial. On the other hand, it is not in https://github.com/flutter-clutter/flutter-simple-edge-detection. Could you tell me which is right? Regards.

Marc
In reply to Dzhamil's comment

You should always use the one in the repository, as I sometimes update things. In this commit https://github.com/flutter-clutter/flutter-simple-edge-detection/commit/a4228b3e96654f608d078adcb4a4e66ba478b360 I added the mentioned file.

Shubham

I can’t create an APK. After running flutter create --org dev.flutterclutter --template=plugin --platforms=android,ios simple_edge_detection I get this error:

[!] Your app is using an unsupported Gradle project. To fix this problem, create a new project by running flutter create -t app and then move the dart code, assets and pubspec.yaml to the new project.

Chris von Wielligh

Hi Marc, thanks for the great tutorial. I created a custom OpenCV library and it works perfectly on iOS. On Android, however, when I call the function, I get the error: Unhandled Exception: type ‘List’ is not a subtype of type ‘FutureOr?’

Marc
In reply to Chris von Wielligh's comment

Can you be more specific on where the error is thrown?

Aamil Silawat

Hello, I need some help from your side. I don’t want the black-and-white effect after detecting the edges. How can I remove it? Can you please help me out? It would be really helpful for my demo. Thanks for this.

Marc
In reply to Aamil Silawat's comment

Hey there. You might have a look at the comment by Đạt from 2020-09-30; I answered it there. If you’re still having problems, just let me know. It’s also answered here: https://github.com/flutter-clutter/flutter-simple-edge-detection/issues/7

Aamil Silawat

Hello Marc, your demo is good and I have implemented it in my project. The problem is that it works well in release mode, but when I create a signed APK for Android, it does not work. Please help me out with this issue, I am very disappointed.

Marc
In reply to Aamil Silawat's comment

What kind of error do you get? Can you be more specific about the symptoms?

Aamil Silawat
In reply to Marc's comment

No error is thrown, the edges are just not detected in the signed APK on Android.

Sunny Bamaniya
In reply to Marc's comment

I’m also getting the same kind of error. I’m unable to scan documents after creating a signed APK.

Steve
In reply to Sunny Bamaniya's comment

Hello Marc,

Can we have a solution for this? The scanner is not working in the signed APK. We are unable to test it after uploading it to the Play Store. We need your feedback as soon as possible.

Marc
In reply to Steve's comment

Hello Steve, I understand this is a major issue for you. My problem is that I am unable to reproduce the error. I have tried to create a signed APK and it worked well (described here: https://github.com/flutter-clutter/flutter-simple-edge-detection/issues/15). What OS are you on? What version of OpenCV do you use? Does that also happen on iOS? You might want to answer these questions in the GitHub issue.

Chris von Wielligh
In reply to Sunny Bamaniya's comment

Hi, same issue. It works in debug mode on device and simulator but does not work in release mode on iOS or Android. Has anyone else experienced the same issue?

Michael Canin

Hi, this question may sound a little silly, but I couldn’t find the answer.

I think when I changed the cpp files in the ios folder, it changed things for iOS.

How can I edit the cpp files for Android? Do I need to open and edit the .so files?

Also, this is awesome work, thanks.

Marc
In reply to Michael Canin's comment

Hey Michael, no, it’s not silly. It’s actually a little bit confusing, but the cpp files for both platforms reside in the ios folder. That’s because CocoaPods can only reference source code from directories at the same level as the .podspec file or below, and we don’t want to duplicate these files for Android. So if you edit the cpp files, the change will affect both platforms.

Michael Canin
In reply to Marc's comment

Thanks!

Juan Terven

Hi Marc, great tutorial!

I want to be able to debug the results. I wonder how you see the results of debug_squares? I see you return a Mat, but I can’t find anywhere how to display it.

Thanks!

Marc
In reply to Juan Terven's comment

You would have to create a usage for it yourself. You might want to just overwrite the given image with the one that has the debug output on it.
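
For anyone else who wants to inspect it: one low-effort option, assuming debug_squares returns a cv::Mat with the candidate contours drawn on it (the helper below is illustrative and not part of the repository), is to write that Mat to disk and pull the file from the device:

#include <opencv2/opencv.hpp>
#include <string>

// Illustrative helper: writes a debug Mat to a sibling file of the input
// image so it can be inspected on the device or pulled via adb.
void dump_debug_image(const cv::Mat& debug_output,
                      const std::string& image_path) {
    // Overwriting the original image (as suggested above) also works;
    // writing to a separate file keeps the input intact.
    cv::imwrite(image_path + ".debug.png", debug_output);
}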

Mansi Bhatt

FAILURE: Build failed with an exception.

  • What went wrong: Execution failed for task ‘:simple_edge_detection:externalNativeBuildDebug’. > Build command failed. Error while executing process C:\Users\Andorid\AppData\Local\Android\Sdk\cmake\3.6.4111459\bin\cmake.exe with arguments {--build D:\Flutter Projects\flutter-simple-edge-detection\android.cxx\cmake\debug\armeabi-v7a --target native_edge_detection}

I’m getting this error

ninja: error: ‘../../../../src/main/jniLibs/armeabi-v7a/libopencv_java4.so’, needed by ‘D:/Flutter Projects/flutter-simple-edge-detection/example/build/simple_edge_detection/intermediates/cmake/debug/obj/armeabi-v7a/libnative_edge_detection.so’, missing and no known rule to make it

Thinh

Hi marc, This package is awesome. Thank you for sharing it.

I think I have a problem with my configuration. Please help me if you have some time. This package adds 50 MB to my current project. However, when I build the example app, the app bundle file is only 18.7 MB. What I do is add the package in pubspec.yaml (same folder as the example app): simple_edge_detection: path: ../flutter-simple-edge-detection

Is there anything else I should do? Thank you

Abbass Sharara

How can I crop the image using the edge detection without converting the image to a binary image?

Abbass Sharara
In reply to Abbass Sharara's comment

On Android phones I want the photo to be cropped without it turning white, please help.

Md. Nazimul Haque

Hello Marc, thanks for the amazing tutorial! It’s really helpful. However, I have a different problem: is it possible to detect shapes (e.g., rounded rectangles and circles) in an image? Is it also possible to detect colour shades inside those shapes?

Your guidance will help me a lot.

Marc
In reply to Md. Nazimul Haque's comment

Hey Nazimul,

since we’re calling C++ code via ffi, everything that is supported by OpenCV is possible. You just need to find out how it’s done in OpenCV (I guess Stack Overflow can be a big help here) and then change the respective code in the C++ files. Sorry that I can’t help you in more detail. Good luck!

parisa

Hi, it can just do edge detection. How can we crop that? For example, this page can understand that it is a card. How can we crop it?

vishnu

Can we use OpenCV for finding duplicate images in local storage? If yes, please tell me how to do it.

Navaneeth B

I am getting the below error when I used your repository and ran it:

/D:/flutter/.pub-cache/hosted/pub.dartlang.org/ffi-0.1.3/lib/src/utf8.dart:63:33: Error: The getter ‘addressOf’ isn’t defined for the class ‘Utf8’.

I am using Flutter SDK version 2.2.3 and Dart SDK version 2.13.4.

Thank you

Can

Hi Marc, thank you for the tutorial. This plugin works perfectly on iOS but I couldn’t make it work on Android. I am getting this error: “ninja: error: ‘../../../../src/main/jniLibs/armeabi-v7a/libopencv_java4.so’, needed by ‘/Users/cansavrun/Desktop/flutter-simple-edge-detection/example/build/simple_edge_detection/intermediates/cmake/debug/obj/armeabi-v7a/libnative_edge_detection.so’, missing and no known rule to make it.” What am I doing wrong?

Marc
In reply to Can's comment

Hey Can, it sounds like you missed the step of copying the OpenCV Android dependencies. You might want to follow the instructions, which I have also documented in the corresponding GitHub repository.

Can
In reply to Marc's comment

Thank you!

EC

Hi Marc, this is a great tutorial. Thank you for sharing this with us. Do you still have a plan to upload this package as a Flutter plugin?

Jayaram Yakkala

Error output from Xcode build: ↳ 2021-08-25 00:27:33.887 xcodebuild[32273:370658] CFURLRequestSetHTTPCookieStorageAcceptPolicy_block_invoke: no longer implemented and should not be called ** BUILD FAILED **

Please help, I am getting this error when building for iOS. I copied opencv2.framework into the ios folder but still get the same error. Xcode’s output: ↳ In file included from /Users/IrusuTechnologies/development/apps/flutter-simple-edge-detection/ios/Classes/image_processor.cpp:1: /Users/IrusuTechnologies/development/apps/flutter-simple-edge-detection/ios/Classes/image_processor.hpp:1:10: fatal error: ‘opencv2/opencv.hpp’ file not found #include <opencv2/opencv.hpp>

jay ram

I am getting this error. Do you have any idea? A failure occurred while executing com.android.build.gradle.internal.tasks.MergeNativeLibsTask$MergeNativeLibsTaskWorkAction > 2 files found with path ‘lib/arm64-v8a/libopencv_java4.so’ from inputs: - /Users/irusutechnolog/Flutter/Apps/apps/flutter-simple-edge-detection/example/build/simple_edge_detection/intermediates/merged_jni_libs/debug/out/arm64-v8a/libopencv_java4.so - /Users/irusutechnolog/Flutter/Apps/apps/flutter-simple-edge-detection/example/build/simple_edge_detection/intermediates/cxx/Debug/1e4u3b6s/obj/arm64-v8a/libopencv_java4.so

gwinter
In reply to jay ram's comment

Hey jay ram, I’m pretty late with this one, but I got the same problem as you. My solution was adding main.jniLibs.srcDirs = [] to the sourceSets node inside build.gradle.

sander

Hi Marc, great tutorial! I was wondering, does the cropped image actually get saved as a file? I have been trying to get the file path of the cropped image to save it in a database, but so far I wasn’t able to find it. I am also a bit confused about how to prevent going back to the camera view after I press the check button on the grey cropped-image view. Hope my questions make sense, I am still pretty new to Flutter.

Marc
In reply to sander's comment

Hey sander, yes, the original file is overwritten by OpenCV in the C++ code. So the file path you put into EdgeDetector.processImage() is actually the path of the cropped image as well.

Kwarc

Hi Marc, I am using these steps to create my app. Everything is running smoothly. However, I am stuck on an issue.

This works like a charm on newer Android versions. However, I tried it on Android 6.0, sdk: 29. On this version it gives me this error:

ArgumentError (Invalid argument(s): Failed to load dynamic library 'libnative_edge_detection.so': dlopen failed: library "libnative_edge_detection.so" not found)

This happens on the following line:

static DynamicLibrary _getDynamicLibrary() {
  final DynamicLibrary nativeEdgeDetection = Platform.isAndroid
      ? DynamicLibrary.open("libnative_edge_detection.so")
      : DynamicLibrary.process();
  return nativeEdgeDetection;
}

It works on newer Android versions though.

tuncay

Hello Marc, thank you for the tutorial. I can run your project on Android, but I’m having trouble with iOS. I’m using an M1-chip iMac. I cloned your project and copied the opencv2.framework folder under the ios folder. I get an error when I try to run it on the iPhone 13 simulator. Can you help me?

Error message: Launching lib/main.dart on iPhone 13 Pro in debug mode… Warning: Missing build name (CFBundleShortVersionString). Warning: Missing build number (CFBundleVersion). Action Required: You must set a build name and number in the pubspec.yaml file version field before submitting to the App Store. Failed to build iOS app. Error output from Xcode build: ↳ xcodebuild: WARNING: Using the first of multiple matching destinations [long list of available simulator destinations omitted] ** BUILD FAILED ** Xcode’s output: ↳ ld: in /Users/tuncay/Desktop/FlutterProjeleri/_Deneme/flutter-simple-edge-detection-master/ios/opencv2.framework/opencv2(median_blur.dispatch.o), building for iOS Simulator, but linking in object file built for iOS, for architecture arm64 clang: error: linker command failed with exit code 1 (use -v to see invocation) Could not build the application for the simulator. Error launching application on iPhone 13 Pro.

Ranulfo Souza

Hi, I moved the same structure as your project (ios/Classes, include/opencv2…, …/main/jniLibs/, CMakeLists.txt, etc.) into my own project and I make this call in my main initState:

final DynamicLibrary nativeEdgeDetection = Platform.isAndroid ? DynamicLibrary.open("libnative_edge_detection.so") : DynamicLibrary.process();

final processImage = nativeEdgeDetection
    .lookup<NativeFunction>("process_image")
    .asFunction();

but when I run it, this error occurs:

Failed to load dynamic library 'libnative_edge_detection.so': dlopen failed: cannot locate symbol "_ZN2cv3MatD1Ev" referenced by "/data/app/~~C5IOTQHzpubDvBFd5WszrQ==/br.gov.rj.teste.teste-_tcsGMGP01lt5fzCB1uweg==/lib/arm64/libnative_edge_detection.so"…

The libnative_edge_detection.so file exists in several folders of my project:

ranulfosouza@MacBook-Air-de-Ranulfo identidadedigital_1.0.6 % find ./ -name libnative_edge_detection.so
.//build/app/intermediates/cmake/debug/obj/armeabi-v7a/libnative_edge_detection.so
.//build/app/intermediates/cmake/debug/obj/x86/libnative_edge_detection.so
.//build/app/intermediates/cmake/debug/obj/arm64-v8a/libnative_edge_detection.so
.//build/app/intermediates/cmake/debug/obj/x86_64/libnative_edge_detection.so
.//build/app/intermediates/merged_native_libs/debug/out/lib/armeabi-v7a/libnative_edge_detection.so
.//build/app/intermediates/merged_native_libs/debug/out/lib/x86/libnative_edge_detection.so
.//build/app/intermediates/merged_native_libs/debug/out/lib/arm64-v8a/libnative_edge_detection.so
.//build/app/intermediates/merged_native_libs/debug/out/lib/x86_64/libnative_edge_detection.so
.//build/app/intermediates/stripped_native_libs/debug/out/lib/armeabi-v7a/libnative_edge_detection.so
.//build/app/intermediates/stripped_native_libs/debug/out/lib/x86/libnative_edge_detection.so
.//build/app/intermediates/stripped_native_libs/debug/out/lib/arm64-v8a/libnative_edge_detection.so
.//build/app/intermediates/stripped_native_libs/debug/out/lib/x86_64/libnative_edge_detection.so

Is it really necessary to create the project with this command? flutter create --org dev.flutterclutter --template=plugin --platforms=android,ios simple_edge_detection

Thanks!

Josh

Thank you for the excellent tutorial!

I ran into an issue: More than one file was found with OS independent path 'lib/x86_64/libopencv_java4.so'.

Reading here https://developer.android.com/studio/projects/gradle-external-native-builds#jniLibs, a simple solution is to rename the ‘jniLibs’ folder to something else (e.g. ‘cmakeLibs’).

k-yone
In reply to Josh's comment

Thanks Josh! I ran into the same error and it worked for me, too.

el mehdi tonzar
In reply to Josh's comment

It’s not working for me!

Ankush Das

Your tutorial is awesome, but how do I crop the detected image?

Marc
In reply to Ankush Das's comment

Hey Ankush,

cropping is actually already done in EdgeDetector.processImage(), which then calls ImageProcessor::crop_and_transform. If you clone it from here, this should work fine already.

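For context, a crop-and-transform step of this kind usually boils down to a perspective warp in OpenCV. Below is a minimal sketch of that idea; the function name, the corner ordering and the output-size computation are simplified and illustrative, so ImageProcessor::crop_and_transform in the repository may differ in detail.

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

// Warps the quadrilateral described by four corners (top-left, top-right,
// bottom-right, bottom-left) into an axis-aligned rectangle.
cv::Mat crop_to_quad(const cv::Mat& image,
                     const std::vector<cv::Point2f>& corners) {
    auto dist = [](const cv::Point2f& a, const cv::Point2f& b) {
        return (float)std::hypot(a.x - b.x, a.y - b.y);
    };

    // Derive the output size from the detected edges (simplified).
    float width  = std::max(dist(corners[0], corners[1]),
                            dist(corners[3], corners[2]));
    float height = std::max(dist(corners[0], corners[3]),
                            dist(corners[1], corners[2]));

    std::vector<cv::Point2f> destination = {
        {0, 0}, {width, 0}, {width, height}, {0, height}
    };

    cv::Mat transform = cv::getPerspectiveTransform(corners, destination);
    cv::Mat cropped;
    cv::warpPerspective(image, cropped, transform,
                        cv::Size((int)width, (int)height));
    return cropped;
}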

Robson Reis

First of all, I would like to congratulate you on the amazing work and documentation. I would like to know whether this has evolved over the last years. I would like to have some features like, for instance, auto-detect and crop edges, like some mobile scanning apps (CamScanner, Genius Scanner, ScanBOT) do after taking a photo of a document: the app detects the edges (a rectangle) and then crops the form, removing the black borders, so the user does not even need to press the shutter button. Is it feasible to do this?

Marc
In reply to Robson Reis's comment

Hello Robson Reis,

I have developed this tool not entirely in my free time, but partly in the context of my job, as I realized there is no working edge detection library for Flutter out there. In fact, my employer is unhappy with me sharing all the progress with the open source world, which unfortunately led me to stop publishing any updates. I am very sorry for that. You are very welcome to work on this yourselves. Everything you’ve mentioned is possible. In fact, I have already implemented a version that has a video preview running and automatically detects edges every 300 ms with good performance.

Abhishek Chauhan

Hi Marc! First of all, THANK YOU for making such a good tutorial. While I was setting up your plugin, I encountered the following message:

The plugin flutter_plugin_android_lifecycle uses a deprecated version of the Android embedding. To avoid unexpected runtime failures, or future build failures, try to see if this plugin supports the Android V2 embedding. Otherwise, consider removing it since a future release of Flutter will remove these deprecated APIs.

If you are plugin author, take a look at the docs for migrating the plugin to the V2 embedding: https://flutter.dev/go/android-plugin-migration.

It says that in the future the Flutter team will be removing the Android V1 embedding, so it would be greatly appreciated if you could migrate the plugin to the V2 embedding (I am just a beginner and don’t know how to get rid of this message). Thank you for your time!
