Implementing edge detection in Flutter

Edge detection is a common task in many types of apps. I had assumed there would be an easy way to do it in Flutter. However, at the time of writing (2020-09-07), this is not the case. The solutions I found were:

  • edge_detection package: sounds promising from the description, but has at least one major flaw: it doesn’t provide an API that takes an image and returns the detected edges. Instead, it leads the user to a native screen on which they can perform predefined actions. This makes it unusable for most use cases, at least in a business context, because the developer cannot define the layout and design of the screen. Apps are usually branded and require individual screen designs
  • opencv package: the idea of having OpenCV bindings available in the Flutter context is appealing. However, iOS is unsupported, which makes it unusable for professional apps. Also, one would have to translate existing examples on the web (written in C++, Java or Python) into Dart calls with differing data types. Another problem is that not every OpenCV function seems to be exposed
  • Using platform channels to call native code: this would lead to a lot of code duplication between the Android and iOS parts

This tutorial describes how to implement an app that lets you use the phone’s camera to take a picture or choose one from the gallery. It then displays the detected edges on this picture. The app will be fully functional on iOS and Android and will only have one code base.

Let’s describe the properties and features of the app we want to implement:

  • A screen with a camera preview that has two buttons: take a photo and choose an image from the gallery
  • A screen that appears when an image has been taken or chosen, displaying the image and rendering the detected edges on top
  • A button that lets the user return from the edge screen to the main screen
  • The UI must not be blocked during edge detection. Instead, the image is shown right away and the edges appear once the process has finished

Concept

Because we want the detection to happen in native C++ in order to have a single code base, we need to make use of the Dart foreign function interface or dart:ffi.

[Figure: the Flutter edge detection call chain]

The call chain works as follows: in our main widget, we give the user the ability to choose a photo. The path of the photo is forwarded to our C/C++ bridge. This is the last class that actually contains Dart code. It converts the given string (the image path) to a format our C code understands, and it loads the shared library we will build from our C++ source files. There, the image is loaded from the file system to check whether it exists. If it does, the Mat (OpenCV’s native image type) is passed to the respective OpenCV functions that perform the actual edge detection. The resulting list of points is then passed back to the entry point of our native code, which converts it to a DetectionResult, essentially a C++ struct holding the four corner coordinates as relative positions. The C/C++ bridge puts that information into a Dart type (EdgeDetectionResult) and passes it back to our widget, where it is used to render the edges on top of our image.

Implementation

We are going to split this part of the tutorial into three sub parts: project setup, native implementation / FFI, and Flutter widget implementation. Let’s start with the project setup.

Project setup

We want this to be a package to be able to integrate it in any Flutter project we want. That’s why we start by generating the project as follows:

flutter create --org dev.flutterclutter --template=plugin --platforms=android,ios simple_edge_detection

More information about this can be found here. Since we want to develop the plugin for both iOS and Android, the --platforms argument is necessary.

Download OpenCV

[Figure: downloading OpenCV – we need it for Android and iOS]

We then head over to the OpenCV releases page and download the iOS pack (which comes in the form of a framework) and the Android pack.

Next, we need to extract the archives and copy the contents as follows into our project directory:

cp -R sdk/native/jni/include project_root
cp -R sdk/native/libs/* project_root/android/src/main/jniLibs/

Here, project_root is the directory called simple_edge_detection that resides at the location where we executed the flutter create command. If the jniLibs directory does not exist yet, create it manually.

Why exactly these directories, you might ask yourself. When libraries are placed in the jniLibs folder, they are automatically included during the build. It’s also possible to override this behavior by editing the build.gradle.
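
For example, if you wanted the native libraries to live somewhere else, a hedged sketch of such an override in android/build.gradle could look like this (the directory name libs is just an example, not part of this project):

android {
    sourceSets {
        main {
            // Assumption: the .so files are placed in android/libs/<ABI>/ instead of the default jniLibs folder
            jniLibs.srcDirs = ['libs']
        }
    }
}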

For iOS, we also need to copy the sources:

cp -R opencv2.framework project_root/ios

Our project directory now looks like this:

simple_edge_detection
  - android
    - src
      - main
        - jniLibs
          - arm64-v8a
            - libopencv_java4.so
          - armeabi-v7a
            - libopencv_java4.so
          - x86
            - libopencv_java4.so
          - x86_64
            - libopencv_java4.so
  - include
    - opencv2
      - ...
  - ios
    - opencv2.framework
    - ...

Build setup Android

Now we need to tell Gradle to build the plugin in such a way that the C++ code we are going to write is accessible as a library from the Dart context. The following goes into the plugin’s android/build.gradle:

android {
    compileSdkVersion 28

    sourceSets {
        main.java.srcDirs += 'src/main/kotlin'
    }
    defaultConfig {
        minSdkVersion 16
    }
    lintOptions {
        disable 'InvalidPackage'
    }
    externalNativeBuild {
        cmake {
            path "CMakeLists.txt"
        }
    }
    defaultConfig {
        externalNativeBuild {
            cmake {
                cppFlags '-frtti -fexceptions -std=c++11'
                arguments "-DANDROID_STL=c++_shared"
            }
        }
    }
}

The most important part is externalNativeBuild because it tells Gradle where the CMakeLists.txt is located. This file contains the build instructions that link our code with OpenCV.

The cmake flags enable run-time type information and exceptions and tell the compiler to use C++11. The argument -DANDROID_STL=c++_shared enables the use of the shared C++ runtime in shared libraries.

Now we need to create the CMakeLists.txt we have just referenced. It lives next to the build.gradle in the android folder.

cmake_minimum_required(VERSION 3.6.0)
include_directories(../include)
add_library(lib_opencv SHARED IMPORTED)
set_target_properties(lib_opencv PROPERTIES IMPORTED_LOCATION ${CMAKE_CURRENT_SOURCE_DIR}/src/main/jniLibs/${ANDROID_ABI}/libopencv_java4.so)
set(EDGE_DETECTION_DIR "../ios/Classes")
set(SOURCES
    ${EDGE_DETECTION_DIR}/native_edge_detection.cpp
    ${EDGE_DETECTION_DIR}/edge_detector.cpp
)
add_library(native_edge_detection SHARED ${SOURCES})
target_link_libraries(native_edge_detection lib_opencv)

We mainly link our sources with the OpenCV library here and make it a library that can be called from our Dart code.

You might stumble across set(EDGE_DETECTION_DIR "../ios/Classes") and ask yourself why the ios directory appears in the CMake file for the Android target. We need to put our C/C++ sources there because we’ll be using CocoaPods (the package manager of the iOS ecosystem), and CocoaPods can only reference source code from directories at the same level as the .podspec file or below. It looks a little hacky because the code base is used for both platforms, but the Android build system is more tolerant in this regard and can reference the sources inside the ios folder from here.

There are two sources: native_edge_detection.cpp and edge_detector.cpp. These are the names our C++ source files will have. native_edge_detection.cpp is going to be our entry point for FFI and edge_detector.cpp will contain the actual edge detection logic and call OpenCV.
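
For orientation, this is where the native sources will end up once we create them (a layout sketch; the header file names follow from the includes shown later):

simple_edge_detection
  - ios
    - Classes
      - native_edge_detection.hpp
      - native_edge_detection.cpp
      - edge_detector.hpp
      - edge_detector.cpp
    - opencv2.framework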

Build setup iOS

Now that we are finished setting up the build system for Android, let’s continue with iOS before we actually write the native code. This requires a few changes in the ios/simple_edge_detection.podspec file.

#
# To learn more about a Podspec see http://guides.cocoapods.org/syntax/podspec.html.
# Run `pod lib lint simple_edge_detection.podspec' to validate before publishing.
#
Pod::Spec.new do |s|
  s.name             = 'simple_edge_detection'
  s.version          = '0.0.1'
  s.summary          = 'A new flutter plugin project.'
  s.description      = <<-DESC
A new flutter plugin project.
                       DESC
  s.homepage         = 'http://example.com'
  s.license          = { :file => '../LICENSE' }
  s.author           = { 'Your Company' => 'email@example.com' }
  s.source           = { :path => '.' }
  s.source_files = 'Classes/**/*.{swift,c,m,h,mm,cpp,plist}'
  s.dependency 'Flutter'
  s.platform = :ios, '8.0'

  # Flutter.framework does not contain a i386 slice. Only x86_64 simulators are supported.
  s.pod_target_xcconfig = { 'DEFINES_MODULE' => 'YES', 'VALID_ARCHS[sdk=iphonesimulator*]' => 'x86_64' }
  s.swift_version = '5.0'

  s.preserve_paths = 'opencv2.framework'
  s.xcconfig = { 'OTHER_LDFLAGS' => '-framework opencv2' }
  s.vendored_frameworks = 'opencv2.framework'
  s.frameworks = 'AVFoundation'
  s.library = 'c++'
end

This is what your .podspec file should look like. Everything below s.swift_version='5.0' was added by us.

We tell CocoaPods not to remove opencv2.framework during the build process by using s.preserve_paths. We then instruct the linker to include OpenCV (OTHER_LDFLAGS), which it can find because we placed the framework in the ios folder earlier and declared it as a vendored framework. OpenCV needs AVFoundation, the camera abstraction on iOS, because its video capture interface is basically a wrapper around it. Finally, s.library = 'c++' links against the C++ standard library.

One important notice: the template’s line s.source_files = 'Classes/**/*' was changed to s.source_files = 'Classes/**/*.{swift,c,m,h,mm,cpp,plist}'. Otherwise the header files would be imported twice (as there is no file filter), which prevents the build process from working properly.

Implementing native edge detection using C++

Now that we are done with the setup, we proceed with implementing the actual native code that both platforms will share.

We start the native implementation by creating the header of our entry point, native_edge_detection.hpp:

struct Coordinate
{
    double x;
    double y;
};

struct DetectionResult
{
    Coordinate* topLeft;
    Coordinate* topRight;
    Coordinate* bottomLeft;
    Coordinate* bottomRight;
};

extern "C"
struct DetectionResult *detect_edges(char *str);

We define two structs that represent the result of our process. Coordinate is just a point with relative x and y coordinates. DetectionResult has four of these as properties; they represent the four corners of our edge detection result. There is only one public function, detect_edges, which receives a string representing the path to our image file and returns a DetectionResult.

The FFI library can only call C symbols, but we are writing C++ code. That’s why we mark these symbols as extern "C".

#include "native_edge_detection.hpp"
#include "edge_detector.hpp"
#include <stdlib.h>
#include <opencv2/opencv.hpp>

extern "C" __attribute__((visibility("default"))) __attribute__((used))
struct Coordinate *create_coordinate(double x, double y)
{
    struct Coordinate *coordinate = (struct Coordinate *)malloc(sizeof(struct Coordinate));
    coordinate->x = x;
    coordinate->y = y;
    return coordinate;
}

extern "C" __attribute__((visibility("default"))) __attribute__((used))
struct DetectionResult *create_detection_result(Coordinate *topLeft, Coordinate *topRight, Coordinate *bottomLeft, Coordinate *bottomRight)
{
    struct DetectionResult *detectionResult = (struct DetectionResult *)malloc(sizeof(struct DetectionResult));
    detectionResult->topLeft = topLeft;
    detectionResult->topRight = topRight;
    detectionResult->bottomLeft = bottomLeft;
    detectionResult->bottomRight = bottomRight;
    return detectionResult;
}

extern "C" __attribute__((visibility("default"))) __attribute__((used))
struct DetectionResult *detect_edges(char *str) {
    cv::Mat mat = cv::imread(str);

    if (mat.size().width == 0 || mat.size().height == 0) {
        return create_detection_result(
            create_coordinate(0, 0),
            create_coordinate(1, 0),
            create_coordinate(0, 1),
            create_coordinate(1, 1)
        );
    }

    vector<cv::Point> points = EdgeDetector::detect_edges(mat);

    return create_detection_result(
        create_coordinate((double)points[0].x / mat.size().width, (double)points[0].y / mat.size().height),
        create_coordinate((double)points[1].x / mat.size().width, (double)points[1].y / mat.size().height),
        create_coordinate((double)points[2].x / mat.size().width, (double)points[2].y / mat.size().height),
        create_coordinate((double)points[3].x / mat.size().width, (double)points[3].y / mat.size().height)
    );
}

The strange-looking visibility attributes prevent the linker from discarding the symbols during link-time optimization, which keeps them callable from Dart.

The first thing we do is read the image using cv::imread(). If the width and height are 0, meaning there is no valid image at this path, we return a DetectionResult that spans the whole image (remember, the coordinates are relative).

Otherwise, we forward the image as a Mat to the heart of our detection, the EdgeDetector. The resulting points are absolute, so we divide them by the width and height of the image to make them relative. That will make our life easier later on in the Flutter widget: if the displayed image is smaller than the original image (which can happen if at least one side exceeds the dimensions of the screen), we don’t need to calculate a separate scaling factor for the coordinates.
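
For example, a corner detected at pixel (600, 400) in a 1200×800 image is returned as the relative coordinate (0.5, 0.5).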

Now let’s have a look at our EdgeDetector. I will not explain every line of code in detail because this is still a Flutter tutorial and the C++ algorithm to detect the edges is only exemplary.

#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

class EdgeDetector {
    public:
        static vector<cv::Point> detect_edges(Mat& image);
        static Mat debug_squares(Mat image);

    private:
        static double get_cosine_angle_between_vectors(cv::Point pt1, cv::Point pt2, cv::Point pt0);
        static vector<vector<cv::Point> > find_squares(Mat& image);
        static float get_width(vector<cv::Point>& square);
        static float get_height(vector<cv::Point>& square);
};

The only public methods we provide are detect_edges, which is the function we just called from the entry point, and debug_squares. I used debug_squares during development to debug the output when I got strange results: it paints the detected squares onto the original image and returns it. If you experience unexpected results like too-small or too-big rectangles, you can use this method to find out what’s going on.
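
As an example of how one could use it (this is not part of the final plugin code), you might temporarily dump its output to disk from within detect_edges in native_edge_detection.cpp and inspect the file; the output path below is only a placeholder:

// Hypothetical debugging snippet: write the annotated image to disk for inspection.
// "/sdcard/Download/edge_debug.png" is just an example path; choose any writable location on your device.
cv::Mat debugImage = EdgeDetector::debug_squares(mat);
cv::imwrite("/sdcard/Download/edge_debug.png", debugImage);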

#include "edge_detector.hpp"

#include <opencv2/opencv.hpp>
#include <opencv2/imgproc/types_c.h>

using namespace cv;
using namespace std;

// helper function:
// finds a cosine of angle between vectors
// from pt0->pt1 and from pt0->pt2
double EdgeDetector::get_cosine_angle_between_vectors(cv::Point pt1, cv::Point pt2, cv::Point pt0)
{
    double dx1 = pt1.x - pt0.x;
    double dy1 = pt1.y - pt0.y;
    double dx2 = pt2.x - pt0.x;
    double dy2 = pt2.y - pt0.y;
    return (dx1*dx2 + dy1*dy2)/sqrt((dx1*dx1 + dy1*dy1)*(dx2*dx2 + dy2*dy2) + 1e-10);
}

vector<cv::Point> image_to_vector(Mat& image)
{
    int imageWidth = image.size().width;
    int imageHeight = image.size().height;

    return {
        cv::Point(0, 0),
        cv::Point(imageWidth, 0),
        cv::Point(0, imageHeight),
        cv::Point(imageWidth, imageHeight)
    };
}

vector<cv::Point> EdgeDetector::detect_edges(Mat& image)
{
    vector<vector<cv::Point>> squares = find_squares(image);
    vector<cv::Point>* biggestSquare = NULL;

    // Sort so that the points are ordered clockwise

    struct sortY {
        bool operator() (cv::Point pt1, cv::Point pt2) { return (pt1.y < pt2.y);}
    } orderRectangleY;
    struct sortX {
        bool operator() (cv::Point pt1, cv::Point pt2) { return (pt1.x < pt2.x);}
    } orderRectangleX;

    for (int i = 0; i < squares.size(); i++) {
        vector<cv::Point>* currentSquare = &squares[i];

        std::sort(currentSquare->begin(),currentSquare->end(), orderRectangleY);
        std::sort(currentSquare->begin(),currentSquare->begin()+2, orderRectangleX);
        std::sort(currentSquare->begin()+2,currentSquare->end(), orderRectangleX);

        float currentSquareWidth = get_width(*currentSquare);
        float currentSquareHeight = get_height(*currentSquare);

        if (currentSquareWidth < image.size().width / 5 || currentSquareHeight < image.size().height / 5) {
            continue;
        }

        if (currentSquareWidth > image.size().width * 0.99 || currentSquareHeight > image.size().height * 0.99) {
            continue;
        }

        if (biggestSquare == NULL) {
            biggestSquare = currentSquare;
            continue;
        }

        float biggestSquareWidth = get_width(*biggestSquare);
        float biggestSquareHeight = get_height(*biggestSquare);

        if (currentSquareWidth * currentSquareHeight >= biggestSquareWidth * biggestSquareHeight) {
            biggestSquare = currentSquare;
        }

    }

    if (biggestSquare == NULL) {
        return image_to_vector(image);
    }

    std::sort(biggestSquare->begin(),biggestSquare->end(), orderRectangleY);
    std::sort(biggestSquare->begin(),biggestSquare->begin()+2, orderRectangleX);
    std::sort(biggestSquare->begin()+2,biggestSquare->end(), orderRectangleX);

    return *biggestSquare;
}

float EdgeDetector::get_height(vector<cv::Point>& square) {
    float upperLeftToLowerRight = square[3].y - square[0].y;
    float upperRightToLowerLeft = square[1].y - square[2].y;

    return max(upperLeftToLowerRight, upperRightToLowerLeft);
}

float EdgeDetector::get_width(vector<cv::Point>& square) {
    float upperLeftToLowerRight = square[3].x - square[0].x;
    float upperRightToLowerLeft = square[1].x - square[2].x;

    return max(upperLeftToLowerRight, upperRightToLowerLeft);
}

cv::Mat EdgeDetector::debug_squares( cv::Mat image )
{
    vector<vector<cv::Point> > squares = find_squares(image);

    for (const auto & square : squares) {
        // draw rotated rect
        cv::RotatedRect minRect = minAreaRect(cv::Mat(square));
        cv::Point2f rect_points[4];
        minRect.points( rect_points );
        for ( int j = 0; j < 4; j++ ) {
            cv::line( image, rect_points[j], rect_points[(j+1)%4], cv::Scalar(0,0,255), 1, 8 ); // blue
        }
    }

    return image;
}

vector<vector<cv::Point> > EdgeDetector::find_squares(Mat& image)
{
    vector<int> usedThresholdLevel;
    vector<vector<Point> > squares;

    Mat gray0(image.size(), CV_8U), gray;

    cvtColor(image , gray, COLOR_BGR2GRAY);
    medianBlur(gray, gray, 3);      // blur will enhance edge detection
    vector<vector<cv::Point> > contours;

    int thresholdLevels[] = {10, 30, 50, 70};
    for(int thresholdLevel : thresholdLevels) {
        Canny(gray, gray0, thresholdLevel, thresholdLevel*3, 3); // max thres: 100  // *3 => recommended setting

        // Dilate helps to remove potential holes between edge segments
        dilate(gray0, gray0, Mat(), Point(-1, -1));

        // Find contours and store them in a list
        findContours(gray0, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);

        // Test contours
        vector<Point> approx;
        for (const auto & contour : contours) {
            // approximate contour with accuracy proportional
            // to the contour perimeter
            approxPolyDP(Mat(contour), approx, arcLength(Mat(contour), true) * 0.02, true);

            // Note: absolute value of an area is used because
            // area may be positive or negative - in accordance with the
            // contour orientation
            if (approx.size() == 4 && fabs(contourArea(Mat(approx))) > 1000 &&
                isContourConvex(Mat(approx))) {
                double maxCosine = 0;

                for (int j = 2; j < 5; j++) {
                    double cosine = fabs(get_cosine_angle_between_vectors(approx[j % 4], approx[j - 2], approx[j - 1]));
                    maxCosine = MAX(maxCosine, cosine);
                }

                if (maxCosine < 0.3) {
                    squares.push_back(approx);
                    usedThresholdLevel.push_back(thresholdLevel);
                }
            }
        }
    }

    return squares;
}

This is a lot of code, but the relevant part is basically taken from Stack Overflow. I enhanced it by ruling out detection results that are too small and by ordering the rectangles so that their points are always returned in a consistent order: top left, top right, bottom left, bottom right.

We store the above classes as edge_detector.hpp and edge_detector.cpp respectively.

There’s only one thing left to do before we are done with the package and can continue with actually calling our edge detection from a Flutter widget: we need to write the class that uses the ffi package to call the library we have just implemented. We call this bridge file edge_detection.dart, put it into the lib folder as the only file of our plugin, and make it look like this:

import 'dart:async';
import 'dart:ffi';
import 'dart:io';
import 'dart:ui';
import 'package:ffi/ffi.dart';
import 'package:flutter/material.dart';


class Coordinate extends Struct {
  @Double()
  double x;

  @Double()
  double y;

  factory Coordinate.allocate(double x, double y) =>
      allocate<Coordinate>().ref
        ..x = x
        ..y = y;
}

class NativeDetectionResult extends Struct {
  Pointer<Coordinate> topLeft;
  Pointer<Coordinate> topRight;
  Pointer<Coordinate> bottomLeft;
  Pointer<Coordinate> bottomRight;

  factory NativeDetectionResult.allocate(
      Pointer<Coordinate> topLeft,
      Pointer<Coordinate> topRight,
      Pointer<Coordinate> bottomLeft,
      Pointer<Coordinate> bottomRight) =>
      allocate<NativeDetectionResult>().ref
        ..topLeft = topLeft
        ..topRight = topRight
        ..bottomLeft = bottomLeft
        ..bottomRight = bottomRight;
}

class EdgeDetectionResult {
  EdgeDetectionResult({
    @required this.topLeft,
    @required this.topRight,
    @required this.bottomLeft,
    @required this.bottomRight,
  });

  Offset topLeft;
  Offset topRight;
  Offset bottomLeft;
  Offset bottomRight;
}

typedef DetectEdgesFunction = Pointer<NativeDetectionResult> Function(
  Pointer<Utf8> x
);

class EdgeDetection {
  static Future<EdgeDetectionResult> detectEdges(String path) async {
    DynamicLibrary nativeEdgeDetection = _getDynamicLibrary();

    final detectEdges = nativeEdgeDetection
        .lookup<NativeFunction<DetectEdgesFunction>>("detect_edges")
        .asFunction<DetectEdgesFunction>();

    NativeDetectionResult detectionResult = detectEdges(Utf8.toUtf8(path)).ref;

    return EdgeDetectionResult(
        topLeft: Offset(
            detectionResult.topLeft.ref.x, detectionResult.topLeft.ref.y
        ),
        topRight: Offset(
            detectionResult.topRight.ref.x, detectionResult.topRight.ref.y
        ),
        bottomLeft: Offset(
            detectionResult.bottomLeft.ref.x, detectionResult.bottomLeft.ref.y
        ),
        bottomRight: Offset(
            detectionResult.bottomRight.ref.x, detectionResult.bottomRight.ref.y
        )
    );
  }

  static DynamicLibrary _getDynamicLibrary() {
    final DynamicLibrary nativeEdgeDetection = Platform.isAndroid
        ? DynamicLibrary.open("libnative_edge_detection.so")
        : DynamicLibrary.process();
    return nativeEdgeDetection;
  }
}

The structs we defined in the C/C++ code also need to be defined here because we have to describe the return type of the function we are calling. That’s why we create classes that extend the Struct class. If you want to know more about how to exchange data via FFI, have a look at this official example.

We also create a class called EdgeDetectionResult that holds essentially the same information as our DetectionResult class but with well-known data types of the Flutter world. We represent the points using the Offset class.

Because C does not know strings, we use a char pointer (Pointer<Utf8>) as the input parameter for our native function. We use DynamicLibrary to load our native library. On Android, its file name is the name of our native target (native_edge_detection), prefixed with lib and with .so as the file extension; on iOS, the symbols are linked into the app binary, which is why DynamicLibrary.process() is used there. For more information on how to call native libraries, have a look at the official docs.

We used the ffi package in the above class, so we need to add it to the dependencies in the pubspec.yaml:

dependencies:
  flutter:
    sdk: flutter
  ffi: ^0.1.3

Implementing the Flutter widgets

Now we are done with all the prerequisites. We implemented a package (plugin) we can use in any Flutter project to detect edges on a given image. Let’s use that to implement the goal we described at the beginning of this article.

For that, we create a new project edge_detection_sample that resides in the same directory as our package.

In its pubspec.yaml, we import the local package like this:

dependencies:
  flutter:
    sdk: flutter
  camera: ^0.5.8+5
  path_provider: ^1.6.14
  image_picker: ^0.6.7+7

  simple_edge_detection:
    path: ../simple_edge_detection/

As you can see, we also need the camera, path_provider and image_picker packages. That’s because we want the app to detect edges on either a camera picture or an image from the gallery.

Let’s start with a widget called Scan. This is where the user will be able to scan, e.g., a sheet of paper, either using the camera or from the gallery.

class Scan extends StatefulWidget {
  @override
  _ScanState createState() => _ScanState();
}

class _ScanState extends State<Scan> {
  CameraController controller;
  List<CameraDescription> cameras;
  String imagePath;
  EdgeDetectionResult edgeDetectionResult;

  @override
  void initState() {
    super.initState();
    checkForCameras().then((value) {
      _initializeController();
    });
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: Stack(
        children: <Widget>[
          _getMainWidget(),
          _getBottomBar(),
        ],
      ),
    );
  }

  Widget _getMainWidget() {
    if (imagePath == null && edgeDetectionResult == null) {
      return CameraView(
        controller: controller
      );
    }

    return EdgeDetectionPreview(
      imagePath: imagePath,
      edgeDetectionResult: edgeDetectionResult,
    );
  }

  Future<void> checkForCameras() async {
    cameras = await availableCameras();
  }

  void _initializeController() {
    if (cameras.length == 0) {
      log('No cameras detected');
      return;
    }

    controller = CameraController(
        cameras[0],
        ResolutionPreset.max,
        enableAudio: false
    );
    controller.initialize().then((_) {
      if (!mounted) {
        return;
      }
      setState(() {});
    });
  }

  @override
  void dispose() {
    controller?.dispose();
    super.dispose();
  }

  Widget _getButtonRow() {
    if (imagePath != null) {
      return Align(
        alignment: Alignment.bottomCenter,
        child: FloatingActionButton(
          foregroundColor: Colors.white,
          child: Icon(Icons.arrow_back),
          onPressed: () {
            setState(() {
              edgeDetectionResult = null;
              imagePath = null;
            });
          },
        ),
      );
    }

    return Row(
      mainAxisAlignment: MainAxisAlignment.center,
      children: [
        FloatingActionButton(
          foregroundColor: Colors.white,
          child: Icon(Icons.camera_alt),
          onPressed: onTakePictureButtonPressed,
        ),
        SizedBox(width: 16),
        FloatingActionButton(
          foregroundColor: Colors.white,
          child: Icon(Icons.image),
          onPressed: _onGalleryButtonPressed,
        ),
      ]
    );
  }

  Padding _getBottomBar() {
    return Padding(
      padding: EdgeInsets.only(bottom: 32),
      child: Align(
        alignment: Alignment.bottomCenter,
        child: _getButtonRow()
      )
    );
  }
}

We make it a stateful widget because it has several variables that can change during its lifetime and should be managed by the widget itself:

  • controller: the CameraController – necessary to take a picture when a button is pressed
  • cameras: the cameras detected on this device. They are queried at the beginning and then used to initialize the controller
  • imagePath: the path to the current image
  • edgeDetectionResult: the detection result for the current image

When the state is initialized, we first check for cameras. If none are found, nothing happens. Otherwise, the camera controller is initialized with the first camera found, which is usually the back-facing camera of a smartphone.

The root widget inside the Scaffold is a Stack. We want the camera preview to fill the whole screen if possible and place the bottom bar with its buttons on top of it.

_getMainWidget() displays either the camera view (when there is no imagePath and no edgeDetectionResult, meaning no picture has been taken yet) or the edge detection preview, i.e. the image with the detected edges painted on top, once both variables are set.

The CameraView and EdgeDetectionPreview widgets are still only placeholders. Let’s implement them.

import 'package:camera/camera.dart';
import 'package:flutter/material.dart';

class CameraView extends StatelessWidget {
  CameraView({
    this.controller
  });

  final CameraController controller;

  @override
  Widget build(BuildContext context) {
    return _getCameraPreview();
  }
  
  Widget _getCameraPreview() {
    if (controller == null || !controller.value.isInitialized) {
      return Container();
    }

    return Center(
      child: AspectRatio(
        aspectRatio: controller.value.aspectRatio,
        child: CameraPreview(controller)
      )
    );
  }
}

The display logic of the CameraView is fairly simple, as it just uses the CameraController of the parent widget to display the CameraPreview. We wrap it in an AspectRatio widget so it keeps the same aspect ratio as the stream of images coming from the camera.

The EdgeDetectionPreview has a little bit more code:

import 'dart:async';
import 'dart:io';
import 'dart:math';
import 'dart:typed_data';
import 'dart:ui' as ui;
import 'package:simple_edge_detection/edge_detection.dart';
import 'package:flutter/material.dart';

class EdgeDetectionPreview extends StatefulWidget {
  EdgeDetectionPreview({
    this.imagePath,
    this.edgeDetectionResult
  });

  final String imagePath;
  final EdgeDetectionResult edgeDetectionResult;

  @override
  _EdgeDetectionPreviewState createState() => _EdgeDetectionPreviewState();
}

class _EdgeDetectionPreviewState extends State<EdgeDetectionPreview> {
  GlobalKey imageWidgetKey = GlobalKey();

  @override
  Widget build(BuildContext mainContext) {
    return Center(
      child: Stack(
        fit: StackFit.expand,
        children: <Widget>[
          Center(
            child: Text('Loading ...')
          ),
          Image.file(
            File(widget.imagePath),
            fit: BoxFit.contain,
            key: imageWidgetKey
          ),
          FutureBuilder<ui.Image>(
            future: loadUiImage(widget.imagePath),
            builder: (BuildContext context, AsyncSnapshot<ui.Image> snapshot) {
              return _getEdgePaint(snapshot, context);
            }
          ),
        ],
      ),
    );
  }

  Widget _getEdgePaint(AsyncSnapshot<ui.Image> imageSnapshot, BuildContext context) {
    if (imageSnapshot.connectionState == ConnectionState.waiting)
      return Container();

    if (imageSnapshot.hasError)
      return Text('Error: ${imageSnapshot.error}');

    if (widget.edgeDetectionResult == null)
      return Container();

    final keyContext = imageWidgetKey.currentContext;

    if (keyContext == null) {
      return Container();
    }

    final box = keyContext.findRenderObject() as RenderBox;

    return CustomPaint(
        size: Size(box.size.width, box.size.height),
        painter: EdgePainter(
          topLeft: widget.edgeDetectionResult.topLeft,
          topRight: widget.edgeDetectionResult.topRight,
          bottomLeft: widget.edgeDetectionResult.bottomLeft,
          bottomRight: widget.edgeDetectionResult.bottomRight,
          image: imageSnapshot.data,
          color: Theme.of(context).accentColor
        )
    );
  }

  Future<ui.Image> loadUiImage(String imageAssetPath) async {
    final Uint8List data = await File(imageAssetPath).readAsBytes();
    final Completer<ui.Image> completer = Completer();
    ui.decodeImageFromList(Uint8List.view(data.buffer), (ui.Image image) {
      return completer.complete(image);
    });
    return completer.future;
  }
}

Instead of simply painting the image at the bottom layer of the stack and some edges on top, we do something different: we load the image asynchronously and then use a FutureBuilder to paint the edges. We do that because we need both the dimensions of the scaled image and the original dimensions of the image. This way, when we draw the edges on top, we can draw them according to the current scale.

The future the FutureBuilder is based on is completed once the image read from the given path has been decoded.

Another advantage is that we can show something in every phase: before the image in the stack is displayed, a text saying “Loading …” is visible. While the image is being decoded asynchronously, the image itself is already shown. And finally, we paint the edges on top once the detection result arrives. This way the user is never left with a blank screen until everything is ready.

It’s important that we choose fit: BoxFit.contain to display the image. This way, the image always fills the width or the height, depending on its aspect ratio; otherwise smaller images might appear tiny in the center. In combination with fit: StackFit.expand this makes the image fill all the available space.

Another significant part is the GlobalKey imageWidgetKey. It lets us read the size of the actually rendered Image widget when painting the edges on top. For more information on how to get the size of a widget in the context of another widget, please refer to the official docs.

Now let’s have a look at the CustomPainter responsible for drawing the edges on top of the image:

class EdgePainter extends CustomPainter {
  EdgePainter({
    this.topLeft,
    this.topRight,
    this.bottomLeft,
    this.bottomRight,
    this.image,
    this.color
  });

  Offset topLeft;
  Offset topRight;
  Offset bottomLeft;
  Offset bottomRight;

  ui.Image image;
  Color color;

  @override
  void paint(Canvas canvas, Size size) {
    double top = 0.0;
    double left = 0.0;


    double renderedImageHeight = size.height;
    double renderedImageWidth = size.width;

    double widthFactor = size.width / image.width;
    double heightFactor = size.height / image.height;
    double sizeFactor = min(widthFactor, heightFactor);

    renderedImageHeight = image.height * sizeFactor;
    top = ((size.height - renderedImageHeight) / 2);

    renderedImageWidth = image.width * sizeFactor;
    left = ((size.width - renderedImageWidth) / 2);


    final points = [
      Offset(left + topLeft.dx * renderedImageWidth, top + topLeft.dy * renderedImageHeight),
      Offset(left + topRight.dx * renderedImageWidth, top + topRight.dy * renderedImageHeight),
      Offset(left + bottomRight.dx * renderedImageWidth, top + (bottomRight.dy * renderedImageHeight)),
      Offset(left + bottomLeft.dx * renderedImageWidth, top + bottomLeft.dy * renderedImageHeight),
      Offset(left + topLeft.dx * renderedImageWidth, top + topLeft.dy * renderedImageHeight),
    ];

    final paint = Paint()
      ..color = color.withOpacity(0.5)
      ..strokeWidth = 2
      ..strokeCap = StrokeCap.round;

    canvas.drawPoints(ui.PointMode.polygon, points, paint);

    for (Offset point in points) {
      canvas.drawCircle(point, 10, paint);
    }
  }

  @override
  bool shouldRepaint(CustomPainter old) {
    return true;
  }
}

The crucial part is where we determine renderedImageWidth and renderedImageHeight. Because we use BoxFit.contain, the image does not necessarily fill the whole canvas: the longer side fits the screen, so bars can appear along the shorter side. If we did nothing, the painter would draw the edges as if the rendered image filled the entire canvas. So we calculate the scale factor and the offsets and adjust the painted edges accordingly.
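
To make the calculation concrete: for a 3000×4000 pixel image painted onto a 1080×1920 canvas, widthFactor = 1080 / 3000 = 0.36 and heightFactor = 1920 / 4000 = 0.48, so sizeFactor = 0.36. The rendered image is therefore 1080×1440 pixels, left = 0 and top = (1920 − 1440) / 2 = 240, which is exactly where BoxFit.contain places it.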

Okay, now we have a screen that displays the camera preview and two buttons, but the buttons don’t trigger anything yet. Let’s change that.

Future _detectEdges(String filePath) async {
  if (!mounted || filePath == null) {
    return;
  }

  setState(() {
    imagePath = filePath;
  });

  EdgeDetectionResult result = await EdgeDetector().detectEdges(filePath);

  setState(() {
    edgeDetectionResult = result;
  });
}

void onTakePictureButtonPressed() async {
  String filePath = await takePicture();

  log('Picture saved to $filePath');

  await _detectEdges(filePath);
}

void _onGalleryButtonPressed() async {
  final picker = ImagePicker();
  final pickedFile = await picker.getImage(source: ImageSource.gallery);
  final filePath = pickedFile?.path;

  log('Picture saved to $filePath');

  _detectEdges(filePath);
}

If we have a filePath which means that an image was either taken by the camera or chosen from the gallery, we start the edge detection. The result is then set to the member variable of our widget.

This _detectEdges() method is called both from the callback of the camera button and from the gallery button as soon as the filePath is obtained.
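
The takePicture() helper called above is not shown in this article. A minimal sketch of what it could look like inside _ScanState, assuming the camera 0.5.x API (where takePicture() expects a target file path), path_provider for a writable directory, and imports of dart:io and dart:developer; the Pictures/edge_detection sub directory is an arbitrary choice:

Future<String> takePicture() async {
  if (controller == null || !controller.value.isInitialized) {
    log('Error: camera is not initialized.');
    return null;
  }

  // Build a unique file path inside the temporary directory.
  final Directory extDir = await getTemporaryDirectory();
  final String dirPath = '${extDir.path}/Pictures/edge_detection';
  await Directory(dirPath).create(recursive: true);
  final String filePath = '$dirPath/${DateTime.now().millisecondsSinceEpoch}.jpg';

  if (controller.value.isTakingPicture) {
    return null;
  }

  try {
    // In camera 0.5.x, takePicture writes the image to the given path.
    await controller.takePicture(filePath);
  } on CameraException catch (e) {
    log(e.toString());
    return null;
  }

  return filePath;
}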

Now what’s missing is the EdgeDetector. This class is responsible for calling the EdgeDetection class from our package, which forwards the call to the native code. We need to take care that the UI is not blocked during that call.

import 'dart:async';
import 'dart:isolate';

import 'package:simple_edge_detection/edge_detection.dart';

class EdgeDetector {
  static Future<void> startEdgeDetectionIsolate(EdgeDetectionInput edgeDetectionInput) async {
    EdgeDetectionResult result = await EdgeDetection.detectEdges(edgeDetectionInput.inputPath);
    edgeDetectionInput.sendPort.send(result);
  }

  Future<EdgeDetectionResult> detectEdges(String filePath) async {
    // Creating a port for communication with isolate and arguments for entry point
    final port = ReceivePort();

    // Spawning an isolate
    Isolate.spawn<EdgeDetectionInput>(
      startEdgeDetectionIsolate,
      EdgeDetectionInput(
        inputPath: filePath,
        sendPort: port.sendPort
      ),
      onError: port.sendPort,
      onExit: port.sendPort
    );

    // Making a variable to store a subscription in
    StreamSubscription sub;

    // Listening for messages on port

    var completer = new Completer<EdgeDetectionResult>();

    sub = port.listen((result) async {
      // Cancel a subscription after message received called
      await sub?.cancel();
      completer.complete(await result);
    });

    return completer.future;
  }
}

class EdgeDetectionInput {
  EdgeDetectionInput({
    this.inputPath,
    this.sendPort
  });

  String inputPath;
  SendPort sendPort;
}

For the call to be non-blocking, it’s not sufficient to use a Future. A Future only schedules work on Dart’s event loop, which still runs on the same thread and shares resources with the rest of the code; a demanding computation would therefore still freeze the UI. To run a highly independent part of the program, we need an Isolate.

The spawn() method of Isolate expects a static or top-level function, a single argument, and ports on which the spawning side can receive errors or the information that the isolate has exited.

Since we can only supply a single argument, we wrap our input path and our SendPort in a class. The SendPort is important because it’s the way the isolate communicates back to the caller; we need it to receive the result of our edge detection. When the isolate calls send() on the SendPort, we receive an event on the ReceivePort, which is why we listen to it. We create a Completer, and as soon as the listener on the port receives the result, we complete the future with the received data.

Result

Okay, we’re done. That’s what our final result looks like:

Pretty cool! There are still a lot of things to improve, like resizing the image to a maximum size to avoid possible performance and memory issues, or using the image stream from the camera to display a live preview of the edge detection. But for now, to show how things can be done, this is it. A possible downscaling step is sketched below.
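
As a hedged sketch of the resizing idea (not part of the repository), one could downscale the Mat in detect_edges before handing it to the EdgeDetector; the 1200 pixel cap is an arbitrary example. Because the returned coordinates are relative, this would not change the result format:

#include <algorithm>
#include <opencv2/opencv.hpp>

// Hypothetical helper: scale the image down so that its longest side is at most maxSide pixels.
cv::Mat downscale(const cv::Mat& input, int maxSide = 1200) {
    int longestSide = std::max(input.size().width, input.size().height);
    if (longestSide <= maxSide) {
        return input;
    }
    double factor = (double)maxSide / longestSide;
    cv::Mat resized;
    cv::resize(input, resized, cv::Size(), factor, factor, cv::INTER_AREA);
    return resized;
}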

An improvement was made afterwards: I added a magnifier that lets you easily position the touch bubbles without the finger covering the area that is crucial. The tutorial can be found here.

The full code can be found here:

Note that I could not include the OpenCV builds in the GitHub repository. You need to perform the download steps (as described under project setup) yourself.

